Analog computers are physical devices that operate on continuous data. They were widely used through the 1970s for performing complex calculations and processing analog signals. They contain functional units such as comparators, multipliers, and function generators, which operate on signals representing physical quantities such as pressure, temperature, voltage, and speed.
Analog computers operate on real numbers as continuous physical quantities rather than discrete symbols. As a result, evaluating complex, continuous functions is often much easier in analog systems than in digital ones. The trade-off is precision: analog systems are more susceptible to error than digital computers.
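To make the continuous-computation point concrete, here is a minimal sketch in Python, with all values hypothetical: a digital machine must approximate even a simple continuous system, dy/dt = -k·y, in discrete steps, whereas an analog integrator circuit produces the solution continuously. The loop below is the digital approximation; the closed-form line is what such a circuit settles to.

```python
import math

# A digital computer approximates the continuous system dy/dt = -k*y
# by stepping through time; an analog integrator solves it continuously.
k, y0 = 0.5, 1.0        # hypothetical decay rate and initial condition
dt, steps = 0.01, 1000  # discretization the digital approach needs

y = y0
for _ in range(steps):
    y += -k * y * dt    # Euler step: accumulate small discrete updates

t = dt * steps
exact = y0 * math.exp(-k * t)  # the continuous solution an analog circuit embodies
print(f"digital approximation: {y:.5f}, continuous (analog) solution: {exact:.5f}")
```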
Why did analog systems become obsolete?
Since these computers were configured physically, engineers had to rewire the circuit (re-patching connections between op-amps, multipliers, and other units) to perform different operations. Their inputs, represented as voltages and pulse frequencies, were susceptible to background noise, and each noisy stage amplified the error further as it propagated through the system.
None of those things is a problem for digital computers, which explains why analog systems became obsolete once digital devices took off. Who would want an error-prone system when a far more efficient and precise one was available?
Why are they coming back?
Even though analog systems were replaced with digital computers and their simple input devices such as mice and keyboards, they seem to be making a comeback. The real reason is data volume: computers today use and generate enormous amounts of data, and digital machines built on the von Neumann architecture develop a memory bottleneck under that load.
This is because incoming data must be converted into binary before the processor can work on it, and that data must then shuttle back and forth between processor and memory. The interface between memory and processor is where the bottleneck occurs.
Analog computers, for their part, process data in memory, which means they can operate on data directly without first converting it into 0s and 1s. Instead of using transistors as switches, analog systems rely on resistive elements to perform computations. Once the calculations are done, only the final output needs to be converted to digital form.
This significantly reduces the number of analog-to-digital (A/D) conversions a process requires, leading to faster results and improved performance. Furthermore, analog setups don't need to perform all the calculations in one cycle. Instead, they can perform multiple partial calculations and combine them into the final result, further improving the system's efficiency.
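As a rough sketch of the idea, assuming an idealized resistor crossbar with made-up values: each stored weight is a conductance, Ohm's law performs the multiplications, summing the currents in each column performs the additions, and only the final column currents pass through an ADC.

```python
# Idealized resistor-crossbar matrix-vector multiply (values hypothetical).
# Weights live in the conductances, so the "memory" does the computing:
# Ohm's law (I = G*V) multiplies; summing currents along a column adds.
G = [[0.2, 0.5, 0.1],   # conductances in siemens: one row per input line
     [0.4, 0.3, 0.6]]
V = [1.0, 0.8]          # input voltages applied to the rows

# Column currents: each output is a dot product computed "in memory".
I = [sum(G[r][c] * V[r] for r in range(len(V))) for c in range(len(G[0]))]

# Only the final result passes through an ADC; here we mimic a coarse
# 8-bit conversion instead of digitizing every intermediate value.
full_scale = 1.0
digital_out = [round(i / full_scale * 255) for i in I]
print(I, digital_out)
```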
For specific operations, analog systems are also faster than digital systems. This makes them useful for processing large waveform datasets such as nuclear pulse data or particle-collider event data, workloads that digital computers struggle to keep up with. Analog devices also tend to have a high mean time between failures (MTBF); some are rated at 300,000 hours, meaning they can be expected to run roughly 300,000 hours between failures.
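For context on what an MTBF rating like the one above implies, here is the standard exponential-failure estimate; the model is a common reliability assumption, not a property of any particular device.

```python
import math

# Survival probability under the usual exponential failure model:
# R(t) = exp(-t / MTBF), using the MTBF figure quoted above.
mtbf = 300_000   # hours
t = 24 * 365     # one year of continuous operation

reliability = math.exp(-t / mtbf)
print(f"P(no failure in one year) = {reliability:.3f}")  # about 0.971
```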
Today, engineers typically use integrated circuits to program analog systems' crossbars. By rewriting the crossbar configuration, an analog system can perform many different operations, unlike older systems that were wired for a limited set of functions. The newer systems also need no manual patching and can solve differential equations and other advanced scientific calculations.
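Continuing the crossbar sketch above (values again hypothetical), "rearranging" the crossbar simply means writing a new conductance matrix: the same physical array then computes a different function, with no manual patching.

```python
# Reprogramming the crossbar = writing new conductances, not rewiring.
# The same hardware now computes a different linear operation.
G_new = [[0.7, 0.1, 0.3],
         [0.2, 0.9, 0.5]]
V = [1.0, 0.8]
I_new = [sum(G_new[r][c] * V[r] for r in range(len(V))) for c in range(len(G_new[0]))]
print(I_new)  # a different function from the same physical array
```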
Analog computing – industry use cases
Today, many businesses use neural networks and deep learning algorithms to extract insights from their data. For data modeling, companies first moved from traditional CPUs to GPUs.
But training models on GPUs still takes a lot of time. The latest hardware optimization for neural network processing is the TPU (Tensor Processing Unit), an integrated circuit developed specifically for training neural networks.
But even with TPUs, modeling can still be too slow. Because of this, some companies are looking into analog computers for neural network workloads, since they are faster and more efficient than TPUs for certain tasks. Even though they have drawbacks (analog systems are hard to program and more prone to noise errors), they are very efficient at handling huge datasets such as those involved in image recognition and speech processing.
For deterministic workloads such as neural network inference, which can tolerate modest precision, analog systems are a great fit.
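One quick way to see why modest precision suffices: perturb a layer's weights with noise, as an imperfect analog array would, and check how little the output moves. The weights and inputs below are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical "trained" weights and an input activation vector.
weights = [0.8, -1.2, 0.5, 0.3, -0.7]
x = [1.0, -0.5, 0.3, 0.9, -0.2]

def dot(w, v):
    return sum(wi * vi for wi, vi in zip(w, v))

exact = dot(weights, x)

# Simulate analog imprecision: ~5% Gaussian noise on every weight.
noisy = [w * (1 + random.gauss(0, 0.05)) for w in weights]
approx = dot(noisy, x)

print(f"exact: {exact:.4f}, noisy: {approx:.4f}")
# The relative error stays small: decisions based on, e.g., the sign
# or argmax of such outputs usually survive this level of noise.
```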
Most sensors used today are analog, so an analog system can process their data natively. Such systems can also use memristors, which retain their resistance state and thus act as non-volatile analog memory, and they invoke an ADC only to digitize the final result. Companies can use such devices to create autonomous devices, robots, and machines that perform low-level tasks continuously without human supervision.
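Here is a toy model of the memristor behavior described above; it is illustrative only and does not reflect any real device's physics. Write pulses nudge a conductance state, which then persists for later reads.

```python
# Toy memristor: conductance state is nudged by write pulses and then
# persists, acting as non-volatile analog memory (illustrative only).
class ToyMemristor:
    def __init__(self, g=0.5, g_min=0.1, g_max=1.0):
        self.g = g                       # conductance state (arbitrary units)
        self.g_min, self.g_max = g_min, g_max

    def write(self, pulse):
        # Each pulse shifts the state; clamp to the physical range.
        self.g = min(self.g_max, max(self.g_min, self.g + 0.05 * pulse))

    def read(self, voltage=0.1):
        return self.g * voltage          # Ohm's law readout, state unchanged

m = ToyMemristor()
for _ in range(4):
    m.write(+1)                          # four potentiating pulses
print(f"state: {m.g:.2f}, readout current: {m.read():.3f}")
```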
While supercomputers are necessary for performing long and complex calculations, they consume a great deal of electrical energy. For example, Tianhe-1A, a supercomputer with a computing power of 2.5 petaflops, draws 4.04 MW. That is a huge amount of power: sustained, 1 megawatt is roughly enough to supply 800 homes.
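To put that trade-off in numbers, a quick back-of-the-envelope calculation using only the figures quoted above:

```python
# Energy efficiency implied by the quoted figures: performance per watt.
flops = 2.5e15    # 2.5 petaflops
power_w = 4.04e6  # 4.04 MW

print(f"{flops / power_w / 1e6:.0f} MFLOPS per watt")  # about 619
```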
So even though supercomputers can perform complex calculations at high speed, the energy they burn per unit of work makes the trade-off steep enough to limit where they are practical.
Analog computers work on a principle similar to the human brain's: they take data directly from other analog chips and use it in their calculations instead of fetching it from a central memory. Furthermore, instead of a 32-bit digital multiplier, analog systems can use 1-bit analog multipliers to perform the same operation. This increases the system's efficiency while decreasing its power dissipation.
Even though you still have to make circuit changes to change a function, you can easily scale such a system for performance and power, since its parallel computing components scale too. Problems that parallelize well will perform better in an analog computing environment, so analog systems could be the answer to supercomputers' power issues.
The future
Today, many companies are working on hybrid systems, i.e., digital computers with built-in analog circuitry. These systems can process both continuous and discrete data. Instead of relying on programming constructs such as loops and algorithms, they configure interconnects to build an electrical analog of the problem.
These systems are fast, reliable, and efficient. Yet even though many hybrid systems are already used for specialized purposes (fuel processors, heart monitors), they have yet to be fully explored. And there's a big reason for that.
Reintroducing analog computing today is a huge architectural and financial challenge. Many companies aren't ready to invest in redesigning their current infrastructure on such a large scale. They would also have to invest in training and certification, since analog expertise has been out of the mainstream for decades.
Even though there is real demand and a market for analog computing, solutions will take time to develop. The change from digital to hybrid has begun, and a shift in thinking is bound to follow. After all, the world is analog, not digital.