Any computer science textbook opens with a list of computer generations. First-generation computers were built from vacuum tubes; they were replaced by second-generation machines made of transistors.
Integrated circuits made it possible to build computers of the third generation, and microprocessors those of the fourth. And there computer history abruptly ends: we hang in a strange timelessness where nothing happens. More than three decades have passed, but the fifth generation has never arrived.
This is especially strange given what has been happening lately. The state of affairs in the computer industry is changing faster and more profoundly than ever before. The usual ways of classifying computing devices are gradually losing touch with reality. Even the inviolability of Microsoft’s and Intel’s positions has come into question.
I have a hypothesis about what happened. Thirty years of the fourth generation have lulled our vigilance. We are, in fact, on the threshold of the fifth generation and do not notice it, because we have grown used to it.
The most obvious sign of a generational change – a new hardware component base – has failed us: this time the components will not change. But the component base was never the only feature distinguishing one generation from another. There are others.
The computing devices that have appeared in recent years share traits that the computers of the past thirty years lacked: different priorities, a different approach to security, to interfaces, to multitasking, to applications. Different everything.
Modern personal computers inherited their security model from the multiuser machines of the seventies. Multiuser in the literal sense: one computer served terminals at which several users worked simultaneously. Order was kept by a system administrator who, unlike ordinary users, had access to every file and every program.
Nowadays the vast majority of computers are used by a single person, and, as a rule, without a sysadmin’s help. The main danger to the computer is no longer people but programs. When installing an application, the user can only hope it does exactly what it should. And if it doesn’t? Any program has access to all of the user’s data and to any hardware resource. It can do almost anything to the computer, and restraining it is almost impossible.
Traditional methods of hardening security, transplanted onto soil foreign to them, produce a kind of madness. Take the requirement to enter an administrator password before performing potentially dangerous actions: it is a split personality, and not a very successful one. An incompetent user does not become any smarter when forced to type the administrator’s password. The authors of malicious programs know this very well and exploit it.
What would a security model look like if it were based on how computers are used now rather than forty years ago? First of all, it would rest on the understanding that the user is alone, and sysadmins are found only in fairy tales (and large corporations).
Second, any application, even a friendly one, should be treated as a potential enemy. The fact that the user installed it does not mean it can be trusted with arbitrary data or hardware resources.
This is exactly the foundation of security in mobile devices running Android and iOS. Applications run in isolated “sandboxes” and cannot affect anything beyond their bounds. Each capability requires a separate permission (in Android, permissions are granted when the program is installed; in iOS, they are requested as it runs, but the essence does not change).
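The idea can be sketched in miniature. The following is a toy model in Python with hypothetical names, not the real API of either platform: the system holds the set of permissions the user granted to each app, and every resource access goes through a check the app cannot bypass.

```python
# Toy model of a permission-gated sandbox (hypothetical names, not a
# real platform API): an app holds only the permissions granted to it,
# and the system, not the app, decides whether an access goes through.

class PermissionDenied(Exception):
    pass

class Sandbox:
    def __init__(self, app_name, granted):
        self.app_name = app_name
        self.granted = set(granted)          # e.g. {"camera", "contacts"}

    def access(self, resource):
        # Every access is checked against the granted set.
        if resource not in self.granted:
            raise PermissionDenied(f"{self.app_name} may not use {resource}")
        return f"{resource} handle for {self.app_name}"

flashlight = Sandbox("flashlight", granted={"camera"})
print(flashlight.access("camera"))           # granted, so this succeeds
try:
    flashlight.access("contacts")            # a flashlight has no business here
except PermissionDenied as e:
    print("blocked:", e)
```

The point of the sketch is the inversion of control: under the old model the program asks the hardware directly; under the new one it can only ask the system, which answers on the user’s behalf.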
Like the outdated security model, the accepted order of resource allocation is another relic of the seventies. Running processes share processor cycles, network access, and other machine capabilities as if they differed from one another no more than the users sitting at a mainframe’s terminals. And that, as we now understand, has long ceased to be the case.
A modern personal computer is a theatre with a single spectator. When resources are scarce, the machine should distribute them with one goal: to direct everything available at what the user is doing right now. Whatever happens backstage, the show must not stop for a minute.
An example of this approach – adopted not for lofty ideological reasons but out of necessity – is once again the mobile devices of recent years. They are forced to guard processor cycles: not only are cycles barely sufficient, they also drain the battery. Virtual memory with unlimited swap is likewise an unaffordable luxury for smartphones and tablets.
The solution found by the developers of Android, iOS, and Windows RT is well known. Starting and stopping programs is now controlled by the system itself. Inactive applications can be unloaded from memory at any moment to free resources for the task the user is working on.
Developers must make sure the user notices nothing, and must use special programming interfaces to do work in the background.
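The contract between system and application can be sketched like this (a toy model in Python, not any platform’s real lifecycle API): the system may evict an inactive app at any moment, so the app persists its state when paused and restores it when relaunched, and the user never learns the app was ever gone.

```python
import json

# Toy model of the mobile lifecycle contract (not a real platform API):
# the system may evict an inactive app at any moment, so the app saves
# its state on pause and restores it on relaunch.

class App:
    def __init__(self, store):
        self.store = store                    # stands in for persistent storage
        saved = store.get("state")
        self.state = json.loads(saved) if saved else {"cursor": 0, "text": ""}

    def on_pause(self):
        # Called before the system may unload us; must be quick.
        self.store["state"] = json.dumps(self.state)

store = {}
app = App(store)
app.state = {"cursor": 7, "text": "draft"}
app.on_pause()
del app                                       # the system evicts the app

app = App(store)                              # user returns; state survives
print(app.state)                              # {'cursor': 7, 'text': 'draft'}
```

Done right, eviction and relaunch are invisible: the restored state is indistinguishable from an app that was running all along.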
Although the result is far from ideal, it is still impressive. Mobile devices manage to react to user commands (or at least create the illusion of reacting) faster than personal computers many times more powerful.
At the heart of every modern mobile platform lies one of the operating systems used on ordinary personal computers – Linux, BSD or even Windows. The difference is an additional layer of abstraction that relieves the user of worrying about specifics.
One of those specifics is files. Building a multi-level hierarchy of directories and filing documents into it was a task within the power of the engineers and scientists who worked with computers in the past.
But it exceeds both the needs and the abilities of the hundreds of millions of non-experts using computers now. Anyone who has watched a technically unsophisticated person at a PC for even a couple of months will confirm this.
Files still exist in the depths of iOS and Android, but they are hidden from the user. Access to them, and exchange of them, is mediated by applications. A text editor will find, show, and open the text documents edited with it, instead of sending the user on a journey across the disk.
The music player will present the music library and make sure the music does not get mixed up with movies and books, which have programs of their own. It is a pity to lose the directory hierarchy, but decent search and rich metadata replace it well.
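How search plus metadata can stand in for a directory tree can be sketched in a few lines (illustrative Python with invented sample data, not any platform’s media API): the library is flat, and the app narrows it by media kind and metadata fields instead of by path.

```python
# Sketch: a flat library where metadata and search replace the directory
# tree (illustrative sample data, not a real platform media API).

library = [
    {"title": "Nocturne No. 2", "kind": "music", "artist": "Chopin"},
    {"title": "Metropolis",     "kind": "movie", "artist": None},
    {"title": "Waltz No. 7",    "kind": "music", "artist": "Chopin"},
]

def find(kind=None, **meta):
    """Return items matching a media kind and any metadata fields."""
    hits = [it for it in library if kind is None or it["kind"] == kind]
    for key, value in meta.items():
        hits = [it for it in hits if it.get(key) == value]
    return hits

# The music player only ever sees music; movies never get mixed in.
print([it["title"] for it in find(kind="music", artist="Chopin")])
# ['Nocturne No. 2', 'Waltz No. 7']
```

No path ever appears: the user asks for “music by Chopin,” not for `/home/user/Music/Chopin/`.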
A pleasant side effect of this approach is the disappearance of the irritating notion of an “unsaved file”. Forcing the user to save data manually is another atavism, a survival from the sad times when disks were small and very slow.
Nowadays most kinds of documents can be saved and restored in a fraction of a second – and not just the latest state, but every version along the way. So why not do it?
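Continuous saving with full version history fits in a few lines. This is a sketch under the assumption the paragraph makes, that storage is cheap and fast enough to persist every change:

```python
import time

# Sketch of continuous versioned saving (assumes storage is cheap and
# fast): every edit is persisted immediately as a new version, and any
# previous version remains recoverable.

class Document:
    def __init__(self):
        self.versions = []            # full history, oldest first

    def edit(self, text):
        # No explicit "save" step: each edit is a durable version.
        self.versions.append({"text": text, "saved_at": time.time()})

    @property
    def current(self):
        return self.versions[-1]["text"] if self.versions else ""

    def revert(self, index):
        # Restoring an old version is itself recorded as a new one,
        # so even the revert can be undone.
        self.edit(self.versions[index]["text"])

doc = Document()
doc.edit("Hello")
doc.edit("Hello, world")
doc.revert(0)
print(doc.current)        # Hello
print(len(doc.versions))  # 3
```

The design choice worth noticing: history is append-only, so “unsaved work” simply cannot exist, and undo falls out for free.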