pacman 5.0 released

Leandro Inácio wrote:

Version 5.0 of pacman has been released, bringing the long-awaited hooks, as well as searching the files of repository packages directly in the sync database, in addition to the makepkg refactoring and the creation of libmakepkg.

Allan also asks for help with refactoring and improving makepkg.

More details at this link.

Movies and series for those of you who are technology fans like me

Hello 2016, how's it going?

We're back to writing on this "hiatic" blog (I just made that word up) to share more cool stuff.

Do you like technology? Computers? Activism? Then this post is exactly for you, who likes watching movies and series about these subjects. Yes, I'll recommend a number of movies, documentaries and series for those who enjoy the topic, but it's also useful for people who want to learn more about this wonderful world of the worldwide computer network.

The recommended movies and series can be found on streaming platforms, or you can download and watch them. The list may contain things you've already seen, or it may be missing something. If something is missing, feel free to write in so I can add it to the list and other people besides me can watch it too.

And no, there will be no Pirates of Silicon Valley here, much less a chronological ordering by year.

This list may or may not be updated frequently. If you have a recommendation, just leave it in the comments and I'll add it. This is what I could remember for now; watch them and recommend them to your friends.

That doesn't mean pt-BR subtitles exist for all of these recommendations, but they are worth watching, or at least trying.

I hope you enjoyed this list. See you next time.

PHP 7.0 packages released

Leandro Inácio wrote:

Packages of the new major version of PHP have been made available in our stable repositories. In addition to the new PHP 7 features, there are the following packaging changes. In general, the package configuration is now closer to what is shipped by the PHP project. Also consult the PHP 7 migration guide for improvements.

Removed packages

  • php-pear
  • php-mssql
  • php-ldap: the module is now included in the php package
  • php-mongo: the new php-mongodb may provide an alternative, even though it is not a compatible drop-in replacement
  • php-xcache: consider using OPcache and optionally APCu for user data caching
  • graphviz: the PHP bindings had to be removed

New packages

Configuration changes

  • open_basedir is no longer set by default
  • The openssl, phar and posix extensions are now built in
  • php-fpm no longer provides a logrotate configuration; it uses syslog/journald by default instead
  • The php-fpm service files no longer enable PrivateTmp=true
  • The php-apache configuration file and module were renamed to php7_module.conf and libphp7.so (see the sketch below)
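
A sketch of the matching httpd.conf adjustment (the paths assume Arch's default Apache layout; verify them against your installation):

# Load the renamed module and include its renamed configuration
# (mod_php also requires the prefork MPM):
LoadModule php7_module modules/libphp7.so
Include conf/extra/php7_module.conf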

News URL: https://www.archlinux.org/news/php-70-packages-released/

2015 achievements

Well, here's a list of things I wanted to do and managed to do this year:

Level 20 in the game Tibia

For a long time (since I was a teenager) I wanted a level 20 paladin character in the game Tibia. And this year I finally made it:

[screenshot: shot-2015-03-09_02-12-19]

Joker cosplay

Some of my friends believed the Joker cosplay wouldn't turn out well, but I didn't think so:

[censored image]

A cool job

When I grew up, I wanted to work (even for free) on a free decentralized network or on application security on Linux (with stronger isolation between processes, yet easy to use like Android, where you grant permission for each resource), so it's nice to join a team that already wanted to do one of these things, and to get paid for it on top of that.

100 posts

One thing I wanted was to have 100 posts published on this blog. I finally made it. =) Now I can say that my goals with this blog have been "accomplished", and I'll post less frequently, just for fun.

What was left out

  • Boost.Http wasn't accepted this year, so it will have to wait for the next one.
  • The degree was left out too. And I'm thinking about switching universities, so it will (probably) take even longer.

Filed under: pt-BR Tagged: 2015, off, tibia

New Arch Linux mirror in Johannesburg, South Africa

Estêvão Valadão Teixeira wrote:

We are pleased to announce that yet another mirror of the Arch Linux Brasil project is online. This mirror is hosted in Host1Plus's data center in Johannesburg, thus offering great speed and availability to users located on the African continent.

We are very happy to help expand Arch Linux around the world, and this is a very important step, since there are few mirrors in Africa. It is an even more important step for the Arch Linux ARM project, as this will be their first mirror in the region! :)

The mirror can be accessed directly at http://archlinux-za.mirror.host1plus.com.

If you happen to be on vacation over there and want a fast mirror ;), add the line below to your /etc/pacman.d/mirrorlist:

Server = http://archlinux-za.mirror.host1plus.com/$repo/os/$arch

Once again, our thanks go to Host1Plus for providing yet another server and for continuing to support Arch Linux globally!

Hack ‘n’ Cast v0.17 - Introduction to Bitcoin

How can you entrust wealth to something intangible, when we aren't even sure who created it?

Download the episode and read the show notes.

Dropping Plasma 4

Leandro Inácio wrote:

Since the KDE 4 desktop has been unmaintained for several months and it is becoming increasingly difficult to maintain two versions of Plasma, we will be removing it from our repositories. Plasma 5.5 has been available for a while and should be stable enough to replace it.

KDE 4 installations will not be automatically updated to Plasma 5. However, we recommend all users upgrade or switch to an up-to-date alternative as soon as possible, since in the near future any update may break the KDE 4 desktop without prior warning. See the wiki for instructions on how to upgrade to Plasma 5.

News URL: https://www.archlinux.org/news/dropping-plasma-4/

C++ ABI change

Leandro Inácio wrote:

GCC 5.x contains libstdc++ with dual ABI support, and we now need to switch to the new ABI.

While the old C++ ABI is still available, it is recommended that you rebuild all non-official packages to use the new ABI. This is extremely important if they link against another library built with the new ABI. You can get a list of packages to rebuild using the following shell script:

#!/bin/bash
# List every foreign (locally built) package whose files reference
# libstdc++.so.6, i.e. candidates for a rebuild against the new ABI.
while read -r pkg; do
    mapfile -t files < <(pacman -Qlq "$pkg" | grep -v '/$')
    grep -Fq libstdc++.so.6 "${files[@]}" 2>/dev/null && echo "$pkg"
done < <(pacman -Qmq)

(Original announcement text by Allan McRae [link])

public/2015-December/027597.html

News URL: https://www.archlinux.org/news/c-abi-change/

Multitasking styles, event loops and asynchronous programming

I previously published a text on this blog about asynchronous programming, but it was written in Portuguese. The time has come for this text to be available in English too. The translation follows below. Actually, it's not a "translation" per se, as I adapted the text to read more pleasantly in English.

One of the subjects that interests me most in programming is asynchronous programming. It's a subject I've been in touch with since I started to play with Qt around 2010, but I slowly moved on to new experiences and "paradigms" of asynchronous programming. Node.js and Boost.Asio were other important experiences worth mentioning. The subject captivated me so much that it was the stimulus for me to study a little computer architecture and operating systems.

Motivation

Several times we stumble upon problems that demand continuously handling several jobs (e.g. handling network events to mutate local files). Intuitively we may be tempted to use threads, as there are several "parallel" jobs. However, not all jobs executed by the computer are carried out exclusively in the CPU.

There are other components besides the CPU. Components that are not programmable and do not execute your algorithm. Components that are usually slower and do other jobs (e.g. converting a digital signal into an analog one). Also, communication with these components usually happens serially, through the fetch-execute-check-interrupt cycle. There is a simplification in this argument, but the fact remains that you don't read two different files from the same hard drive in parallel. In summary, using threads isn't a "natural" abstraction for the problem, as it doesn't "fit" or share the same design characteristics. Using threads can add complex and unnecessary overhead.

“If all you have is a hammer, everything looks like a nail”

Another reason to avoid threads as an answer to the problem is that soon you'll have more threads than CPUs/cores and will face the C10K problem. Even if threads didn't add an absurd overhead, the mere fact that you need more threads than available CPUs makes your solution more restricted, as it won't work on bare-metal environments (i.e. environments lacking a modern operating system or a scheduler).

A major performance problem that threads add comes from the fact that they demand a kernel context switch. Of course, this isn't the only problem: there is also the cost involved in creating the thread, which might have a short lifetime and spend most of it sleeping. The very process of creating a thread isn't completely scalable, because it requires stack allocation, and memory is a global resource: right there we have a point of contention.

The performance problem of a kernel context switch resembles the performance problem of the function calling conventions faced by compilers, only worse.

Functions are isolated and encapsulated units, and they should behave as such. When a function is called, the current function doesn't know which registers will be used by the new function. The current function doesn't hold the information about which CPU registers will be overwritten during the new function's lifetime. Therefore, function calling conventions add two points of extra processing: one to save state onto the stack and one to restore it. That's why some programmers are so obsessed with function inlining.

The context-switch problem is worse, because the kernel must save the values of all registers, and there is also the overhead of the scheduler and of the context switch itself. Processes would be even worse, as there would be the need to reconfigure the MMU. This style of multitasking is called preemptive multitasking. I won't go into details, but you can always dig deeper into computer architecture and operating systems books (and beyond).

It's possible to obtain concurrency, which is the property of executing several jobs in the same period of time, without real parallelism. When those tasks are more I/O-oriented, it can be interesting to abandon parallelism to achieve more scalability, avoiding the C10K problem. And if a new design is required, we can seize the opportunity to also take cooperative multitasking into account and obtain a result even better than the one initially planned.

The event loop

One approach that I see used more in games than anywhere else is the event loop approach. It's this approach that we'll look at first.

There is this library, the low-level SDL library, whose purpose is to be just a multimedia abstraction layer, supplying what isn't already available in the C standard library and focusing on the game developer. The SDL library makes use of an event system to handle communication between the process and the external world (keyboard, mouse, windows...), which is usually consumed in some loop that the programmer prepares. This same structure is used in other places, including Allegro, which was SDL's biggest competitor in the past.

The idea is to have a set of functions that bridge the communication between the process and the external world. In the SDL world, events are described through the non-extensible SDL_Event type. You then use functions like SDL_PollEvent to receive events, and dedicated functions to initiate operations that act on the external world. Support for asynchronous programming in the SDL library is weak, but this same event-loop principle could be used in a library providing stronger support for asynchronous programming. Below you can find a sample that makes use of SDL events:
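
(A minimal sketch of such a loop, assuming SDL2; an illustration of the pattern, not the original post's listing.)

#include <SDL2/SDL.h>

int main()
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *window = SDL_CreateWindow("events", SDL_WINDOWPOS_UNDEFINED,
                                          SDL_WINDOWPOS_UNDEFINED, 640, 480, 0);
    bool running = true;
    while (running) {                   // the event loop itself
        SDL_Event event;
        while (SDL_PollEvent(&event)) { // drain all pending events
            switch (event.type) {
            case SDL_QUIT:              // the window was closed
                running = false;
                break;
            case SDL_KEYDOWN:           // a message from the external world
                if (event.key.keysym.sym == SDLK_ESCAPE)
                    running = false;
                break;
            }
        }
        // update state and render here
        SDL_Delay(16);                  // sleep a bit before the next iteration
    }
    SDL_DestroyWindow(window);
    SDL_Quit();
}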

There are GUI libraries like GTK+, EFL and Qt which take this idea one step further, abstracting into an object the event loop that was previously written and rewritten by you. The Boost.Asio library, which focuses on asynchronous operations rather than GUIs, has a class with a similar purpose: the io_service class.

To remove your need to write boilerplate code routing events to specific actions, the previously mentioned classes handle this task for you, possibly using callbacks, which are an old abstraction. The idea is that you bind events to the functions that are interested in handling those events. The logic to route these events now belongs to the "event loop object" instead of a "real"/raw event loop. This style of asynchronous programming is a passive style, because you only register the callbacks and transfer control to the framework.

Now that we're one level of abstraction above event loops, let's stop discussing them. From now on, the objects we've been referring to as "event loop objects" will be called executors.

The executor

The executor is an object that can execute encapsulated units of work. Using only C++11, we can implement an executor that schedules operations related to waiting for some duration of time. The RTC is a global resource and, instead of approaching the problem by creating several threads that start blocking operations like sleep_for, we'll use an executor. The executor will schedule and manage all the required events. A simple implementation of such an executor follows:
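
(A sketch written for this text, assuming only C++11; it is not the original post's listing.)

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <functional>
#include <thread>
#include <utility>
#include <vector>

class timer_executor
{
public:
    using clock = std::chrono::steady_clock;
    using callback_type = std::function<void()>;

    // Schedule cb to run once dur has elapsed.
    void add_sleep_for_callback(clock::duration dur, callback_type cb)
    {
        tasks.emplace_back(clock::now() + dur, std::move(cb));
    }

    // Sort by deadline and serve every task, sleeping only until the
    // nearest deadline. Flaw: work scheduled from inside a callback is
    // ignored, because we iterate over a snapshot of the queue.
    void run()
    {
        std::sort(tasks.begin(), tasks.end(),
                  [](const task &a, const task &b) { return a.first < b.first; });
        for (auto &t : tasks) {
            std::this_thread::sleep_until(t.first);
            t.second();
        }
        tasks.clear();
    }

private:
    using task = std::pair<clock::time_point, callback_type>;
    std::vector<task> tasks;
};

int main()
{
    timer_executor executor;
    using std::chrono::seconds;
    executor.add_sleep_for_callback(seconds(2), [] { std::puts("world"); });
    executor.add_sleep_for_callback(seconds(1), [] { std::puts("hello"); });
    executor.run(); // one thread serving two concurrent waits
}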

This code reminds me of sleepsort.

In the example, without using threads, it was possible to execute several concurrent jobs related to waiting time. To do so, we gave the executor the responsibility of sharing the global RTC resource. Because the CPU is faster than the requested tasks, only one thread was enough and, even so, there was a period of time during which the CPU was idle.

There are some concepts to extract from this example. First, let's consider the executor to be a standard abstraction, provided in some interoperable way among all the pieces of code that make use of asynchronous operations. When a program wants to perform the asynchronous wait operation, it requests the start of the operation from some abstraction — in this case the executor itself, but it's more common to find these operations in "I/O objects" — through a function. Control is passed to the executor — through the run method — which will check for notifications of finished tasks. When there are no more tasks in the queue, the executor gives control of the thread back.

Because there is only one thread of execution but several tasks to execute, we have the resource sharing problem. In this case, the resource to be shared is the CPU/thread itself. Control should go back and forth between some abstraction and the user code. This is the core of cooperative multitasking. There are customization points to affect behaviour among the algorithms that execute the tasks, making them give CPU time to the execution of other tasks.

One advantage of the cooperative style of multitasking is that all "switch" points are well defined. You know exactly when control goes from one task to another. So the context-switch overhead that we saw earlier — where all register values need to be saved — doesn't exist. A solution that is more elegant, efficient and green.

The object that we pass as the last argument to the function add_sleep_for_callback is a callback — also known as a completion handler. Think about what would happen if a new wait operation were requested within one of the completion handlers that we registered. An improved version of the previous executor follows:
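
(Again a sketch, not the original listing: the queue is re-examined on every iteration, so waits requested from inside a completion handler are served too, and a counter tracks how deeply run() calls nest, which is what the SHLVL remark below refers to.)

#include <chrono>
#include <cstdio>
#include <functional>
#include <queue>
#include <thread>
#include <utility>
#include <vector>

class timer_executor
{
public:
    using clock = std::chrono::steady_clock;
    using callback_type = std::function<void()>;

    void add_sleep_for_callback(clock::duration dur, callback_type cb)
    {
        tasks.push({clock::now() + dur, std::move(cb)});
    }

    void run()
    {
        ++depth; // how many nested run() frames are active, like SHLVL
        while (!tasks.empty()) { // re-checked after every handler
            task next = tasks.top();
            tasks.pop();
            std::this_thread::sleep_until(next.deadline);
            next.cb(); // the handler may schedule new waits (or call run)
        }
        --depth;
    }

private:
    struct task
    {
        clock::time_point deadline;
        callback_type cb;
        // Inverted so that the earliest deadline stays on top of the heap.
        bool operator<(const task &other) const
        {
            return deadline > other.deadline;
        }
    };

    std::priority_queue<task> tasks;
    int depth = 0;
};

int main()
{
    timer_executor executor;
    using std::chrono::seconds;
    executor.add_sleep_for_callback(seconds(1), [&executor] {
        std::puts("hello");
        // Scheduling from inside a completion handler now works:
        executor.add_sleep_for_callback(seconds(1), [] { std::puts("world"); });
    });
    executor.run(); // prints "hello", then "world" one second later
}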

This implementation detail reminds me of the SHLVL shell variable.

An interesting case is that of the JavaScript language. JavaScript has a kind of "implicit executor", which is triggered when the VM reaches the end of your code. In this case, you don't need to write code like "while (true) executor.run_one()" or "executor.run()". You only need to register the callbacks and make sure there are no infinite loops before the executor gets the chance to take control.

With the motivation presented, the text will now make fewer mentions of I/O for the sake of simplicity and focus, but keep in mind that we use asynchronous operations mostly to interact with the external world. Therefore, many operations are scheduled conditionally in response to the notification of completion of a previous task (e.g. if the protocol is invalid, close the socket; else schedule another read operation). Proposals like N3785 and N4046 define executors also to schedule thread pools, not only timeouts within a thread. Lastly, it's possible to implement executors that schedule I/O operations within the same thread.

Asynchronous algorithms represented in synchronous manners

The problem with the callback approach is that we no longer have code that is clean and readable. Previously, the code could be read sequentially because that's what the code was: a sequence of instructions. Now we need to spread the logic among lots and lots of callbacks, leaving blocks of related code far apart from each other. Lambdas help a little, but not enough. The problem is known as callback/nesting hell, and it's similar to the spaghetti code problem. As if that weren't bad enough, the execution flow becomes convoluted because of the asynchronous nature of the operations themselves, and constructs like branching and repetition control structures — and even error handling — assume representations that are far from ideal: obscure and difficult to read.

One abstraction of procedures that is very important to asynchronous programming is the coroutine. There is the procedure abstraction that we refer to by the name "function". This so-called "function" models the concept of a subroutine. And then we have the coroutine, which is a generalization of the subroutine: it has two more operations, suspend and resume.

When your "function" is a coroutine — possible when the language provides support for coroutines — it's possible to suspend the function before it reaches the end of its execution, possibly handing over a value at the suspension point. One example where coroutines are useful is a hypothetical Fibonacci-generating function and a program that uses it to print the first 10 numbers of this infinite sequence. The following Python code demonstrates an implementation of such an example, where you can get an insight into the elegance, readability and reuse that the concept of coroutines allows when we face the problem of cooperative multitasking:
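
(A sketch using a Python generator, which supplies the suspend and resume operations through yield and next.)

def fibonacci():
    # Local state survives suspension: a and b keep their values
    # every time the coroutine is resumed.
    a, b = 0, 1
    while True:
        yield a          # suspension point: hand a value to the caller
        a, b = b, a + b  # execution resumes here on the next "call"

gen = fibonacci()
for _ in range(10):
    print(next(gen))     # resume the coroutine and take the next value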

This code reminds me of the setjmp/longjmp functions.

One characteristic we must pay attention to, in case you aren't familiar with the concept, is that the values of the local variables are preserved across the several "calls". More accurately, when the function is resumed, it has the same execution stack that it had when it was suspended. This is a "full" implementation of the coroutine concept — sometimes called a stackful coroutine. There are also stackless coroutines, where only the "line under execution" is remembered/restored.

The N4286 proposal introduces a new keyword, await, to identify a suspension point of a coroutine. Making use of such functionality, we can construct the following example, which elegantly defines an asynchronous algorithm described in a very "synchronous manner". It also makes use of the several language constructs that we're used to — branching, repetition and others:
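
(A sketch adapted from the tcp_reader example in N4286; Tcp::Connect and Read belong to the proposal's illustrative API, not to a shipping library. C++20 later spelled the keyword co_await.)

#include <cstdint>
#include <future>

std::future<int64_t> tcp_reader(int64_t total)
{
    char buf[64 * 1024];
    auto conn = await Tcp::Connect("127.0.0.1", 1337); // suspends until connected
    do {
        auto bytesRead = await conn.Read(buf, sizeof(buf)); // suspends again
        if (bytesRead == 0)
            break;               // peer closed the connection: plain branching
        total -= bytesRead;
    } while (total > 0);         // plain repetition
    return total;
}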

Coroutines solve the complexity problem that asynchronous algorithms impose. However, there are several coroutine proposals and none of them has been standardized yet. An interesting case is the Asio library, which implemented a mechanism similar to Duff's device, using macros, to provide stackless coroutines. For C++, I hope the committee continues to follow the "you only pay for what you use" principle and that we get implementations with high performance.

While you wait for standardization, we can opt for library-level solutions. If the language is low-level and gives the programmer enough control, these solutions will exist — even if they're not portable. C++ has fibers and Rust has mioco.

Another option while you wait for coroutines is not to use them. It's still possible to achieve a high-performance implementation without them. The big problem is the highly convoluted control flow you might end up with.

Completion tokens

While there is no standardized approach for asynchronous operations in the C++ language, the Boost.Asio library has adopted, since version 1.54, an interesting concept: an extensible solution. This solution is well documented in the N4045 proposal, and you'll only find a summary here.

The proposal assumes that the callback model is not always interesting and can even be confusing at times, so it should be evolved to support other models. Now, instead of receiving a completion handler (the callback), the functions receive a completion token, which adds the necessary customization point to support other asynchronous models.

The N4045 document uses a top-down approach, first showing how the proposal is used and only then proceeding to the low-level implementation details. You can find sample code in the style of the document below:
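
(A sketch in the spirit of the document's example rather than a verbatim quote: file_stream and its async_open are hypothetical I/O objects, while yield_context and yield[ec] are real Boost.Asio facilities.)

#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>

void transfer(file_stream &file, boost::asio::ip::tcp::socket &socket,
              boost::asio::yield_context yield)
{
    file.async_open("data.txt", yield); // suspends until the file is open
    char buf[4096];
    for (;;) {
        boost::system::error_code ec;
        std::size_t n = file.async_read_some(boost::asio::buffer(buf),
                                             yield[ec]); // suspends again
        if (ec)
            break; // end of file or failure: ordinary control flow
        // Suspends once more until all the data has been written:
        boost::asio::async_write(socket, boost::asio::buffer(buf, n), yield);
    }
}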

In the code you just saw, every time the variable yield is passed to some asynchronous operation (e.g. open and read), the function is suspended until the operation completes. When the operation completes, the function is resumed at the point where it was suspended, and the function that started the asynchronous operation returns the result of the operation. The Fiber library, briefly mentioned earlier, provides a yield_context for the Asio extensible model: boost::fibers::asio::yield. It's asynchronous code written in a synchronous manner. However, an extensible model is adopted because we don't know which model will become the standard for asynchronous operations, and therefore we cannot force a single model to rule them all.

To build an extensible model, the return type of the function needs to be deduced (using the token), and the return value also needs to be produced (also using the token). The return type is deduced from the token type, and the returned value is created from the token passed as an argument. And you still have the handler, which must be called when the operation completes; the handler is extracted from the token. The completion tokens model makes use of type traits to extract all this information. If the traits aren't specialized, the default behaviour is to treat the token as a handler, making the approach compatible with the callback model. A sketch of this plumbing follows:
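
(async_sleep is a hypothetical wrapper reusing the timer_executor sketched earlier; handler_type and async_result are the real traits from that era of Boost.Asio.)

#include <boost/asio.hpp>
#include <chrono>
#include <utility>

template<class CompletionToken>
typename boost::asio::async_result<
    typename boost::asio::handler_type<CompletionToken, void()>::type>::type
async_sleep(timer_executor &executor, std::chrono::milliseconds dur,
            CompletionToken &&token)
{
    // Deduce the real handler type from the token...
    using Handler =
        typename boost::asio::handler_type<CompletionToken, void()>::type;
    Handler handler(std::forward<CompletionToken>(token));

    // ...and derive the return value from the handler.
    boost::asio::async_result<Handler> result(handler);

    executor.add_sleep_for_callback(dur, handler); // start the operation
    return result.get(); // void, a future, etc., depending on the token
}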

Several examples are given in the N4045 document:

  • use_future
  • boost::fibers::asio::use_future
  • boost::fibers::asio::yield
  • block

The std::future approach has a meaningful performance impact, which is not cool, as explained in the N4045 document. That's the reason why I don't mention it in this text.

Signals and slots

One alternative that was proposed to the callback model is the signals and slots approach. This approach is implemented in libsigc++, Boost, Qt and a few other libraries.

This proposal introduces the concept of a signal, which is used to notify an event but abstracts away the delivery of the notification and the process of registering the functions interested in the event. The code that notifies events just needs to worry about emitting the signal every time the event happens, because the signal itself takes care of the set of registered slots and so on.

This approach usually allows a very decoupled architecture, in opposition to the very verbose approach largely used in Java. An interesting effect, depending on the implementation, is the possibility of connecting one signal to another signal. It's also possible to have multiple signals connected to one slot, or one signal connected to multiple slots, as the sketch below shows.
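
(A minimal sketch with Boost.Signals2; the names are illustrative.)

#include <boost/signals2.hpp>
#include <iostream>

int main()
{
    // The signal abstracts the delivery of the "data arrived" event.
    boost::signals2::signal<void(int)> on_data;

    // Two independent slots register interest in the same event.
    on_data.connect([](int value) { std::cout << "logger: " << value << '\n'; });
    on_data.connect([](int value) { std::cout << "worker: " << value << '\n'; });

    on_data(42); // emitting the signal notifies every connected slot
}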

A signal is usually related to one object, and when the object is destroyed, the connections are destroyed too. Likewise, it's possible to have slot objects that automatically disconnect from the connected signals when destroyed, so you have a safer abstraction.

Given that signals are independently implemented abstractions, usable as soon as they are exposed, it's naturally intuitive to remove the callback argument from the functions that initiate asynchronous operations, to avoid duplication of effort. If you go further in this direction, you'll even remove the very function that starts asynchronous operations, exposing just the signal used to receive notifications, and your framework will be following the passive style instead of the active style. Examples of this style are Qt's socket, which doesn't have an explicit function to request the start of a read operation, and the POCO library, which doesn't have a function to request the receipt of an HTTP request.

Another detail we have in the signals and slots approach is the idea of access control. In Qt's case, signals are implemented in a way that demands the cooperation of a preprocessor, Qt's own executor and the QObject class. In Qt, the access control rules for emitting a signal follow the same rules as protected methods in C++ (i.e. all child classes can emit signals defined in parent classes), while the operation of connecting a signal to another signal or to a slot follows the same rules as public members in C++ (i.e. anyone can perform the operation).

In libraries that implement the signal concept as a type, it's common to see a type that encapsulates both the operation of emitting the signal and the operation of connecting it to some slot (unlike what we see in the futures and promises proposal, where each one can have different access control).

The signals and slots approach is cool, but it doesn't solve the complexity problem that coroutines solve. I only mentioned this approach to better discuss the difference between the active style and the passive style.

Active model vs passive model

In the passive model, you don't schedule the start of the operations. It's what we commonly find in "productive" frameworks, but there are many questions that this style doesn't answer very well.

Let's make a quick comparison between the Qt and Boost.Asio libraries. In both, you find classes to abstract the socket concept, but with Qt you handle the readyRead event and use the readAll method to receive the buffer with the data, whereas with Boost.Asio you start the operation async_read_some and pass the buffer as an argument. Qt uses the passive style, while Boost.Asio uses the active style.

Qt's readyRead event acts independently of the user and requires a buffer allocation every time it occurs. Some questions then arise: "how can I customize the buffer allocation algorithm?", "how can I make the application recycle buffers?", "how do I use a buffer that is allocated on the stack?" and so on. The passive model doesn't answer questions like these, so you need to fatten the socket abstraction with more customization points to allow such behaviours. It's a combinatorial explosion for every abstraction that deals with asynchronous operations. In the active model, these customizations are very natural: if the operation the developer is about to start demands any resource, the developer just passes the resource as an argument (see the sketch below). And it's not only about resource acquisition. Another example of a question the passive model doesn't answer well is "how do I decide whether to accept a new connection now or postpone it to when the server is less busy?". It's great power for applications that are seriously concerned about performance and need fine adjustments.
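
(A sketch of the active style with Boost.Asio: the caller owns the buffer, so a stack-allocated one is passed naturally; the endpoint and request are illustrative.)

#include <array>
#include <iostream>
#include <string>
#include <boost/asio.hpp>

int main()
{
    namespace asio = boost::asio;
    asio::io_service service; // the executor (io_context in newer versions)
    asio::ip::tcp::socket socket(service);

    asio::ip::tcp::resolver resolver(service);
    asio::connect(socket, resolver.resolve({"example.com", "80"}));
    std::string request = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    asio::write(socket, asio::buffer(request));

    std::array<char, 4096> buf; // stack-allocated, owned by the caller
    socket.async_read_some(asio::buffer(buf),
        [&buf](boost::system::error_code ec, std::size_t n) {
            if (!ec)
                std::cout.write(buf.data(), n);
        });

    service.run(); // hand control to the executor
}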

Besides performance and fine-tuning, the passive model also causes trouble for debugging and testing, thanks to the inversion of control.

I must admit that the passive model is quite good for rapid prototyping and increased productivity. Fortunately, we can implement the passive model on top of the active model, but the opposite is not so easy to do.

Bonus references

If you liked this subject and want to dig deeper, I suggest the following references:


Filed under: computação, en Tagged: C++, javascript, programação, Python, Qt

Xorg 1.18.0 enters [testing]

Leandro Inácio wrote:

Xorg 1.18.0 is in [testing] with the following changes:

  • You can choose between xf86-input-evdev and xf86-input-libinput.
  • xf86-input-aiptek will not be updated and will be removed when xorg-1.18.0 enters [extra].

Pending issues:

NVIDIA drivers are not yet compatible with 1.18.0. You can hold back the update by adding --ignoregroup=xorg to the pacman command or by adding 'xorg' to IgnoreGroup in pacman.conf.

News URL: https://www.archlinux.org/news/xorg-1180-enters-testing/