Cognitive brain-inspired cloud computing, coupled with blockchain-based security models, is having a moment in the IT world. Artificial neural network algorithms like deep learning, which are very loosely based on the way the human brain operates, now allow digital computers to perform extraordinary feats: translating languages, hunting for subtle patterns in huge amounts of data, and beating the best human players at Go. But even as engineers continue to push this mighty computing strategy, the energy efficiency of digital computing is fast approaching its limits. Data centers and supercomputers already draw megawatts; by most estimates, data centers alone account for roughly 2 percent of the electricity consumed in the USA. The human brain, by contrast, runs quite well on about 20 watts, the power produced by just a fraction of the food a person eats each day. If we want to keep improving computing, we will need our computers to become more like our brains: using each neuron as a nano-bot, letting them learn from each other until a fully autonomous network develops within a human brain, and integrating blockchain as a major building block to protect against any type of hacking, now or in the near future. Brain-based computing is also a step closer to what Elon Musk proposed in the middle of last year.
Adding to this momentum is the recent focus on neuromorphic technology, which promises to move computing beyond simple neural networks and toward circuits that operate more like the brain's neurons and synapses do. The development of such physical brain-like circuitry is actually pretty far along. Research groups across the globe have developed artificial synapses and dendrites that can respond to signals like never before, opening an opportunity that would have been far-fetched even a few decades ago. The question is what it would take to integrate so many building blocks into a brain-scale computer, a true ecosystem. It should be possible to build a silicon version of the human cerebral cortex with transistor technology, riding Moore's law as explained in my previous blog. What's more, the resulting nano-technology machine would take up less than a cubic meter of space and consume less than 100 watts, not too far from the human brain, which makes the idea very practical indeed. Couple this with Intel's latest 49-qubit quantum computing (QC) chipset, whose register spans a state space of 2^49 (about 5.6 × 10^14) basis states, a feat that was hard to imagine even a century ago.
IBM and Intel are pushing hard to advance the technology toward 1,000 qubits and beyond, which could generate computing power no one could have imagined, and perhaps a basic foundation for systems like IBM Watson in decades to come, solving many problems in automated cloud computing and robotics as we move faster toward a fully digitized economy with nearly fully automated manufacturing. The whole idea of QC was not really conceivable a few decades ago, but now it is well within reach. A 1,000-qubit register would span a state space of 2^1000 basis states, a number that dwarfs the estimated 10^80 atoms in the observable universe, and computing on that scale is perhaps a logical way to finally model the many astrophysical forces yet to be identified by physicists across the globe, complementing the Large Hadron Collider at CERN in Switzerland in advancing our knowledge of space for future generations. Even with such rapid progress toward QC, major business use-case models still need to be developed, and at the same time schools from K-12 through graduate programs across the globe should start teaching our talented students where they can contribute to make a difference in the 21st century.
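To put those qubit counts in perspective, here is a quick back-of-the-envelope calculation (a sketch only; the 10^80 figure for atoms in the observable universe is a common rough estimate, and the register sizes are the ones named above):

```python
import math

# An n-qubit register spans a state space of 2**n basis states.
print(2 ** 49)                 # ~5.6e14 states for a 49-qubit chip

# How many qubits would the state space need before it exceeds the
# estimated 1e80 atoms in the observable universe?
atoms_in_universe = 10 ** 80
n = math.ceil(math.log2(atoms_in_universe))
print(n)                       # 266 -- comfortably under 1,000 qubits
```

So a 49-qubit chip falls far short of "every atom in the universe," but a 1,000-qubit machine would overshoot that benchmark enormously.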
That is not to say creating such a brain-like computer would be easy. I would envision the overall design requiring $100-200 billion (or higher) to design, engineer, and build as an end-to-end ecosystem, including some significant packaging innovations to make it compact enough to fit within the nano-scale sections of the brain. There is also the question of how we would program and train the ultimate quantum computer. Neuromorphic researchers are still struggling to understand how to make thousands of artificial neurons work together and how to translate brain-like activity into useful engineering applications. Still, the fact that we can envision such a system means we may not be far off from smaller-scale chips that could be used in portable and wearable electronics. Such gadgets demand low power consumption, so a highly energy-efficient neuromorphic chip (yet to be developed), even one that takes on only a subset of computational tasks such as signal processing, could be revolutionary. Existing capabilities, like speech recognition, could be extended to handle noisy environments. We could even imagine future smartphones conducting real-time language translation between you and the person you're talking to, via the hybrid-cloud architecture published last week in my LinkedIn blog list. Think of it this way: in the 40 years since the first signal-processing integrated circuits, Moore's Law has improved energy efficiency by a factor of at least 1,000, if not more.
The most brain-like neuromorphic chips could dwarf such improvements, potentially driving down power consumption by another factor of 100 million. That would bring computations that would otherwise need a data center to the palm of your hand, or to new quantum computers, still a few years away from running the factoring algorithm of MIT's Peter Shor at useful scale, though Intel just announced a 49-qubit chipset and over time we may well exceed 1,000 qubits in the 2022 time-frame. The ultimate brain-like machine will be one in which we build analogues for all the essential functional components of the brain: the synapses, which connect neurons and allow them to receive and respond to signals; the dendrites, which combine and perform local computations on those incoming signals; and the cell body, or soma, of each neuron, which integrates inputs from the dendrites and transmits its output on the axon.
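To make that division of labor concrete, here is a minimal leaky integrate-and-fire sketch of those same components: synaptic weights scale the inputs, a dendritic sum combines them, and a soma integrates the result and fires on the "axon" when a threshold is crossed. All parameter values are illustrative, not drawn from any particular chip:

```python
import numpy as np

def lif_neuron(spike_trains, weights, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: synapses weight incoming spikes, the
    dendrite sums them, and the soma integrates with leak and emits a
    spike on the axon whenever the threshold is crossed."""
    v = 0.0                                  # membrane potential (soma)
    output = []
    for x in spike_trains:                   # x: incoming spikes (0/1)
        dendritic_sum = np.dot(weights, x)   # synapses + dendrite
        v = leak * v + dendritic_sum         # leaky integration
        if v >= threshold:                   # fire on the axon
            output.append(1)
            v = 0.0                          # reset after the spike
        else:
            output.append(0)
    return output

# Three input lines; coincident spikes eventually drive the output.
w = np.array([0.4, 0.3, 0.5])
train = [np.array([1, 0, 1]), np.array([0, 1, 0]), np.array([1, 1, 1])]
print(lif_neuron(train, w))                  # [0, 1, 1]
```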
Simple versions of all these basic components have already been implemented in silicon. The starting point for such work is the same metal-oxide-semiconductor field-effect transistor (MOSFET) that is used by the billions to build the logic circuitry in modern digital processors. These devices have a lot in common with neurons. Neurons operate using voltage-controlled barriers, and their electrical and chemical activity depends primarily on channels in which ions move between the interior and exterior of the cell, a smooth, analog process that involves a steady buildup or decline rather than a simple on-off operation. MOSFETs are also voltage controlled and operate by the movement of individual units of charge. And when MOSFETs are operated in the "subthreshold" mode, below the voltage threshold used to digitally switch between on and off, the amount of current flowing through the device is very small: less than a thousandth of what is seen in the typical switching of digital logic gates.
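The standard weak-inversion model captures this behavior: below threshold, drain current grows exponentially with gate voltage, much like a neuron's ion-channel conductance. The sketch below uses illustrative constants (the leakage scale I_0 and slope factor n are process dependent; the thermal voltage kT/q is about 26 mV at room temperature):

```python
import math

def subthreshold_current(v_gs, i0=1e-12, n=1.5, v_t=0.026):
    """Weak-inversion drain current: I_D = I_0 * exp(V_GS / (n * V_T)).
    i0 (leakage scale) and n (slope factor) are illustrative values."""
    return i0 * math.exp(v_gs / (n * v_t))

# Currents stay in the nanoamp range and below, a tiny fraction of the
# microamp-to-milliamp currents typical of digital switching.
for v in (0.10, 0.20, 0.30):
    print(f"V_GS = {v:.2f} V -> I_D = {subthreshold_current(v):.2e} A")
```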
The notion that subthreshold transistor physics could be used to build brain-like circuitry was pioneered at leading universities starting in the 1970s, and the same line of thinking helped revolutionize very-large-scale integration (VLSI) circuit design in that era. Transistors exhibit rich, interesting analog behavior that chip engineers largely fail to take advantage of, even on complex chipsets whose transistor counts run into the billions. The conventional digital approach essentially involves "taking all the beautiful physics that is built into … transistors, mashing it down to a 1 or 0, and then painfully building it back up with AND and OR gates to reinvent the multiply." A more physics-based computer could execute more computations per unit of energy than its digital counterpart, and in theory it should also take up significantly less space and draw less power.
Neuromorphic engineers have made all the basic building blocks of the brain out of silicon with a great deal of biological fidelity. A neuron's dendrites, axon, and soma can all be fabricated from standard transistors and other circuit elements, and such circuits have been designed to run at power levels and energy consumption similar to those of biological neurons, such as the ones in a squid. If we had instead used analog circuits to model the equations neuroscientists have developed to describe that behavior, we would need on the order of 10 times as many transistors; performing those calculations with a digital computer would require even more space. Emulating the synapse is a little trickier. A device that behaves like a synapse must be able to remember what state it is in, respond in a particular way to an incoming signal, and adapt its response over time, similar to the way adaptive weights behave in a stochastic statistical model.
In AI/ML-based neural networks, each node in the network has a weight associated with it, and those weights determine how data from different nodes are combined. A device that can hold a variety of different weights and be reprogrammed on the fly is exactly what an artificial synapse needs. Ideally the device is also nonvolatile, meaning it remembers its state even when not in use, a capability that significantly reduces how much energy it needs. One such device is the floating-gate transistor, which is used to build the memory cells in today's flash-based SSDs. In an ordinary MOSFET, a gate controls the flow of electricity through a current-carrying channel. A floating-gate transistor has a second gate that sits between this electrical gate and the channel. This floating gate is not directly connected to ground or any other component. Thanks to that electrical isolation, which is enhanced by high-quality silicon-insulator interfaces, charges remain in the floating gate for a long time. The floating gate can take on many different amounts of charge and so can have many different levels of electrical response, an essential requisite for creating an artificial synapse capable of varying its response to stimuli.
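A rough software model of such a device might look like the sketch below; the level count and programming interface are hypothetical illustrations of the idea, not a description of any real part:

```python
class FloatingGateSynapse:
    """Toy model of a floating-gate synapse: stored charge is
    nonvolatile, quantized into many programmable levels, and sets
    the weight applied to incoming signals."""

    def __init__(self, levels=256):
        self.levels = levels          # distinct programmable levels
        self.level = levels // 2      # start at mid-scale

    def program(self, delta):
        """Add or remove charge, clamped to the physical range."""
        self.level = max(0, min(self.levels - 1, self.level + delta))

    @property
    def weight(self):
        return self.level / (self.levels - 1)   # normalized 0..1

    def respond(self, signal):
        return self.weight * signal              # weighted response

syn = FloatingGateSynapse()
syn.program(+40)                      # fine-grained, on-the-fly tuning
print(syn.weight, syn.respond(1.0))   # state persists with no refresh
```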
In recent years, the neuromorphic engineering field has seen a surge of research into artificial synapses built from nano-devices such as memristors, resistive RAM, and phase-change memories (as well as floating-gate devices). But it will be hard for these new artificial synapses to improve on two-decade-old floating-gate arrays. Memristors and other novel memories come with programming challenges: some have device architectures that make it difficult to target a single specific device in a crossbar array; others need a dedicated transistor in order to be programmed, adding significantly to their footprint. Because floating-gate memory is programmable over a wide range of values, it can be fine-tuned to compensate for device-to-device manufacturing variation more easily than many nano-devices can. A number of neuromorphic research groups that tried integrating nano-devices into their designs have recently come around to using floating-gate devices.
The main question is how all these brain-like components will work together. In the human brain, of course, neurons and synapses are intermingled. Neuromorphic chip designers must take a similarly integrated approach, with all neural components on the same chip, tightly mixed together. This is not the case in many neuromorphic labs today: to make research projects more manageable, different building blocks may be placed in different areas. Synapses, for example, may be relegated to an off-chip array, and connections may be routed through another chip called a field-programmable gate array, or FPGA. But as we scale up neuromorphic systems, we will need to take care not to replicate the arrangement in today's computers, which lose a significant amount of energy driving bits back and forth between logic, memory, and storage. Today, a computer can easily consume 10 times as much energy moving the data needed for a multiply-accumulate operation, a common signal-processing computation, as it does performing the calculation itself, as the sketch below illustrates.
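A toy energy tally makes the point; the 10x data-movement penalty is the figure cited above, while the absolute units are made up for illustration:

```python
E_MAC = 1.0    # energy per multiply-accumulate, arbitrary units
E_MOVE = 10.0  # fetching the operands can cost ~10x the arithmetic

def dot_product(a, b):
    """Dot product as a chain of MACs, tallying energy as it goes."""
    acc, energy = 0.0, 0.0
    for x, y in zip(a, b):
        energy += E_MOVE       # move operands in from distant memory
        acc += x * y           # the multiply-accumulate itself
        energy += E_MAC
    return acc, energy

result, energy = dot_product([1, 2, 3], [4, 5, 6])
print(result, energy)          # 32.0, 33.0 -- movement dominates
```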
The brain, by contrast, minimizes the energy cost of communication by keeping operations highly local. The memory elements of the brain, such as synaptic strengths, are mixed in with the neural components that integrate signals. And the brain's "wires," the dendrites and axons that extend from neurons to carry incoming signals and outgoing pulses respectively, are generally fairly short relative to the size of the brain, so they don't require large amounts of energy to sustain a signal. From anatomical data, we know that more than 90 percent of neurons connect with only their nearest 1,000 or so neighbors.
Today, we are just beginning to discover these physical algorithms, that is, the processes that will allow brain-like chips to operate with more brain-like efficiency. Implemented in the right chipset, such physical algorithms could deliver a thousandfold improvement in energy efficiency over conventional analog signal processing. Eventually, by lowering the supply voltage and using smaller transistors, chip designers should be able to build chips that parallel the brain in efficiency for a range of computations. Today's applications rely only loosely on what we know about how the brain actually works; the next 50 years will undoubtedly see the incorporation of far more such knowledge. We already have much of the basic hardware we need to accomplish this neuroscience-to-computing translation, driven by new blockchain-based technology.
In addition, we must develop a better understanding of how the final hardware should behave, and of which computational schemes will yield the greatest real-world benefits. We have come pretty far with a very loose model of how the brain works, but neuroscience could lead to far more sophisticated brain-like computers. And what greater feat could there be than using our own brains to learn how to build new ones? This is an idea to reckon with over the next few decades, and not far from reach in my humble opinion. Of course, privacy, reliability, and high integrity must also be at the center of the overall design for this new world of nano-based brain computing.
Ultimately, the entire idea of cloud-based cognitive brain computing coupled with blockchain for security will become the essence of an idea I published a year ago called "Doctors Inside the Body," given that so many sensors are now being developed with nano-based technologies; they will finally be deployed once a multi-stage clustering technique is designed for the brain's billions of neurons. The impact of such massive innovation will rely not only on AI but also on ML for cognitive AI/ML-based brain computing, allowing very sophisticated tracking of electrical signals throughout the body, down to the level of each cell. That makes me very optimistic that we can solve many of the elusive problems of the 20th and early 21st centuries across the different continents and give individuals a far better quality of service and life.