The slowdown in the rate at which CPU speed increases has become noticeable as transistors approach quantum scales, where physics behaves differently, e.g. when the tunneling effect kicks in. Much has already been written on this topic.
While alternatives to silicon chips are being actively researched, the uncertainty remains, at least for now. Relying on exponential hardware growth is now a real business risk. Many businesses, however, still rely on Moore’s Law, implicitly and sometimes explicitly.
There is now a high probability that hardware speed will no longer keep pace with business growth for free. This means the technology and processes behind application stacks should be revisited. Wasting CPU time will be hard to justify financially. For example, using high-level dynamic languages for everything or ignoring computational complexity when designing algorithms will cost far more when projected over two to four years of business operation.
The selection of a programming language already depends heavily on the efficiency of its concurrency model, a consequence of the shift toward multicore architectures.
The problem is that many developers are experienced in one or more high-level languages. It will not be immediately possible for them to switch to a lower-level language, such as C, where they have to worry about memory safety, build internals, and so on.
Fast language for everyone
In this context, the Go language is gaining popularity while addressing one part of the problem. It is a very efficient compiled language with garbage collection, a super simple build toolchain and an easy-to-use concurrency model that abstracts away architecture details, allowing programs to scale transparently on multicore systems. Many developers are actually switching to Go from higher-level dynamic languages.
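As a minimal sketch of that concurrency model, the snippet below splits a summation across goroutines, one per logical CPU; the Go runtime multiplexes them onto all available cores with no architecture-specific code. The function and variable names here are illustrative, not from any particular library.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// sumRange sums the integers in [lo, hi) and sends the partial result.
func sumRange(lo, hi int, out chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	s := 0
	for i := lo; i < hi; i++ {
		s += i
	}
	out <- s
}

func main() {
	const n = 1_000_000
	workers := runtime.NumCPU() // goroutines are scheduled across all cores
	out := make(chan int, workers)
	var wg sync.WaitGroup

	chunk := n / workers
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if w == workers-1 {
			hi = n // the last worker picks up the remainder
		}
		wg.Add(1)
		go sumRange(lo, hi, out, &wg)
	}
	wg.Wait()
	close(out)

	total := 0
	for s := range out {
		total += s
	}
	fmt.Println(total) // sum of 0..n-1 = n*(n-1)/2 = 499999500000
}
```

The same code runs unchanged on a laptop or a 64-core server; only `runtime.NumCPU()` differs.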
Assisted application optimization
The other part of the problem is the application itself. All the benefits of a fast language are easily canceled out by an exponential-time algorithm, inefficient memory usage or I/O. Automatic profiling tools are needed to help developers proactively find and resolve such problems. Moreover, they should guide developers in deciding what to optimize first and where exactly. Go comes with a built-in interactive performance profiling toolset, pprof, which is a good starting point.
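For instance, importing the standard `net/http/pprof` package is enough to expose live profiling endpoints over HTTP. The sketch below starts such a server and fetches the heap profile from it to show it is live; the port and the self-request are illustrative choices, not requirements.

```go
package main

import (
	"fmt"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/ handlers on http.DefaultServeMux
	"time"
)

func main() {
	// Serve the profiling endpoints in the background; in a real
	// application this runs alongside the normal request handlers.
	go http.ListenAndServe("localhost:6060", nil)
	time.Sleep(100 * time.Millisecond) // crude wait for the listener (demo only)

	// A live profile is now one HTTP request away, e.g.:
	//   go tool pprof http://localhost:6060/debug/pprof/profile  (30s CPU profile)
	//   go tool pprof http://localhost:6060/debug/pprof/heap     (heap snapshot)
	resp, err := http.Get("http://localhost:6060/debug/pprof/heap?debug=1")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.StatusCode)
}
```

Interactive use normally goes through `go tool pprof`, which can render call graphs and flame-style views from these endpoints.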
At StackImpact, we’ve designed a profiler that automates production profiling for developers, empowering them to create efficient applications. The blog post Profiling Go Applications in Production explains it in more detail.