Artificial Intelligence (AI) has been an industry buzzword for years. It has made headlines as a potential danger, with visions of robots overtaking the world, but most often, AI is viewed as a powerful new technology that can automate processes and make our workplaces, and our world, more efficient, productive and intelligent. AI has the potential to transform nearly every industry, enabling complex capabilities such as computer vision, image recognition, machine learning and natural language processing.
Although AI has been talked about for years, we are, without a doubt, currently embarking on a new era of artificial intelligence, thanks to advancements in computing, deep learning and in-memory computing. Deep learning, a subset of machine learning and AI, has the potential to positively impact and transform nearly every industry, from healthcare to automotive, manufacturing, industrial inspection and retail, among others. By enabling workers to collect data, run analytics and perform predictive maintenance, humans and machines can close the gap between automation and augmentation.
Samsung Semiconductor is accelerating this digital transformation to upgrade recognition, processing and analysis across different data forms. However, in order to make this future of AI computing a reality, there are numerous technical challenges that need to be solved to address bandwidth-intensive AI workloads. From processing power, to in-memory computing and performance enhancements, the requirements of AI are significant. As we develop technology that can handle the influx of complex, data-intensive processes that AI promises, including image recognition and natural language processing, we have to take a close look at the requirements and improvements needed across in-memory computing and parallel processing.
The amount of data we generate, consume and analyze has grown exponentially over the years, and it shows no signs of slowing down. According to a 2017 IDC report, annual data creation is forecast to reach 180 zettabytes in 2025, which translates to an astounding 180 trillion gigabytes!
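The zettabyte-to-gigabyte conversion above can be checked directly. A minimal sketch (the variable names are illustrative):

```python
# Sanity-check the conversion: 1 zettabyte (ZB) is 10^21 bytes and
# 1 gigabyte (GB) is 10^9 bytes, so 1 ZB equals one trillion GB.
ZB = 10 ** 21
GB = 10 ** 9

forecast_zb = 180
forecast_gb = forecast_zb * ZB // GB

print(f"{forecast_zb} ZB = {forecast_gb:,} GB")  # 180 ZB = 180,000,000,000,000 GB
```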
The rapid, accumulative growth of data is happening everywhere, from our personal devices to industrial-size facilities. As the amount of data increases at astronomical rates, the industry needs to identify new, innovative ways to analyze the ever-increasing flow of information, which is where AI comes into play.
AI is the herald of the new world of big data. Through gathering, analyzing and leveraging valuable data insights, AI can enable the automation of processes that will improve business outcomes. In order for this to become a reality, innovation within processing and memory becomes of utmost importance in order to meet high-bandwidth AI requirements.
Deep learning has been around since the 1980s, but after decades of stagnation, it finally reached a breakthrough in the early 2000s, when engineers combined distributed computing with neural network research. This has helped fuel the next era of AI innovation through deep learning and parallel processing.
Computing power has followed a gradual but consistent path of performance enhancements and faster clock speeds over the past several decades. However, the industry is approaching the limits of material physics, and the gains available from higher clock speeds are tapering off. The solution to increased processing performance lies in parallel processing: multi-core architectures that split a task into parts which are executed simultaneously on separate processors in the same system.
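The split-and-combine pattern described above can be sketched in a few lines. This is a minimal illustration, not a deep learning workload: one task (summing squares) is divided into chunks that separate worker processes compute simultaneously, and the partial results are then combined.

```python
# A minimal sketch of parallel processing: split one task into parts,
# run the parts simultaneously on separate cores, combine the results.
from multiprocessing import Pool

def partial_sum(chunk):
    """Work assigned to one core: sum the squares of its slice of the data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4

    # Split the task into equal parts, one per worker.
    chunks = [data[i::n_workers] for i in range(n_workers)]

    with Pool(n_workers) as pool:
        # Each chunk is processed by a separate worker process.
        partials = pool.map(partial_sum, chunks)

    # Combine the partial results into the final answer.
    total = sum(partials)
    assert total == sum(x * x for x in data)
```

The same pattern, scaled up, is what multi-core CPUs and GPU-style accelerators exploit for deep learning: the work per worker shrinks roughly in proportion to the number of processors, provided the task can be divided cleanly.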
With Samsung’s memory-based parallel processing, deep learning has grown faster, enabling a wide-reaching set of applications, from intelligent personal assistants, to smart speakers, to language translation and AI photo filters. With its high-performance memory technology, Samsung is opening up the next chapter in AI innovation.
Advanced memory and processing power are two critical elements for faster, more accurate processing of AI technology. At the International Conference on Computer Vision (ICCV) in 2017, a study was presented which revealed that deep learning performance can be significantly enhanced simply by increasing the sheer memory size of a system.
Samsung realized the need for enhanced memory capabilities and addressed it with high-bandwidth memory (HBM) solutions that can meet the rapidly evolving needs of deep learning and AI. As the industry leader in advanced memory solutions, Samsung has introduced several key products to advance AI, including high-performance processors for deep learning and HBM2, the first of the next-generation memory products to reach mass production. Given its significant R&D efforts and global scale, Samsung is able to provide HBM2 memory in volume while meeting the ever-increasing requirements for processing power and speed driven by AI and deep learning.
Over the last ten years, the speed of memory and networks has increased 20- to 100-fold, but the server side has fallen behind due to the low input/output (I/O) performance of its disk technology. With the massive influx of data we are currently experiencing, and are forecast to keep experiencing, this creates a significant bottleneck in the system.
As we roll out bandwidth-intensive AI and deep learning technologies, the solution to removing the data bottleneck in deep learning stacks is in-memory computing. In-memory technology is driving changes in the server market, as it greatly increases the speed of data indexing and transactions. Samsung has introduced a series of high-capacity, high-performance DRAMs, including 3DS DRAM modules, GDDR6, HBM2 and server SSDs, in order to bring greater innovation to AI and continue to transform businesses worldwide.
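The disk-versus-memory gap behind that bottleneck can be illustrated with a toy example. This is a deliberately simplified sketch (the data and names are illustrative, not a real server workload): a disk-oriented lookup re-scans a file for every query, while an in-memory index answers the same query with a single hash lookup in DRAM.

```python
# A toy contrast between disk-oriented and in-memory data access:
# scanning a file per query versus indexing the records in RAM.
import os
import tempfile

records = {f"key{i}": f"value{i}" for i in range(100_000)}

# Disk-oriented: the table lives in a file; each lookup re-scans it,
# so every query pays the I/O cost of reading from storage.
path = os.path.join(tempfile.mkdtemp(), "table.txt")
with open(path, "w") as f:
    for k, v in records.items():
        f.write(f"{k}\t{v}\n")

def disk_lookup(key):
    with open(path) as f:
        for line in f:
            k, v = line.rstrip("\n").split("\t")
            if k == key:
                return v
    return None

# In-memory: the whole table lives in DRAM; each lookup is a
# constant-time hash probe with no disk I/O at all.
index = dict(records)

assert disk_lookup("key99999") == index["key99999"] == "value99999"
```

The asymptotic difference (a full scan per query versus a constant-time probe) is one reason in-memory architectures can speed up indexing and transactions so dramatically once DRAM capacity is large enough to hold the working set.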
We are currently embarking on the next era of AI, with significant advancements across in-memory computing and parallel processing finally matching the needs for advanced deep learning. Samsung Semiconductor is leading this digital transformation by providing high-bandwidth memory interfaces and DRAM solutions for the server side that are high-capacity and high-performance. These enhanced solutions enable advanced data processing and analysis, accelerating the future of deep learning, computer vision, natural language processing and other applications that have the power to transform the world we live in, providing usable machine intelligence around the globe.