Storage for High-performance Computing
By: Storage Magazine
Download this next:
The fast-changing, symbiotic evolution of HPC and AI
By: Dell Technologies and Intel
Type: Research Content
Enterprises are at a crossroads—a juncture where High Performance Computing (HPC) and Artificial Intelligence (AI) intersect. This convergence isn’t mere happenstance; it’s a strategic imperative. Here’s why:
Gen AI Unleashed: Generative AI (gen AI) holds the key to operational efficiencies, faster decisions, and growth. HPC’s computational muscle accelerates gen AI training, while AI optimizes HPC workflows.
Speed and Insight: HPC turbocharges AI model training, while AI fine-tunes HPC simulations.
Competitive Edge: In a climate-aware era, sustainability matters. IT leaders, armed with HPC-AI fusion, shape decisions that impact future generations.
Learn more about Dell Technologies solutions powered by Intel® Xeon® Processors
These are also closely related to: "Storage for High-performance Computing"
-
Case study: Using pre-fab datacentres to meet Norway's growing demand for colocation space
By: TechTarget ComputerWeekly.com
Type: eGuide
Datacentre operator Green Mountain embraces Schneider Electric's pre-fabricated datacentre designs to ensure it is positioned to respond to the growing demand from the hyperscale and HPC communities for colocation capacity in Norway.
-
High Performance Storage at Exascale
By:
Type: Talk
As the computational performance of modern supercomputers approaches the ExaFLOPS level (10^18 floating-point operations per second), the demands on the storage systems that feed these supercomputers continue to increase. On top of traditional HPC simulation workloads, more and more supercomputers are also designed to simultaneously satisfy new data-centric workflows with very different I/O characteristics. Through their massive scale-out capabilities, current parallel file systems are more than able to provide the capacity and raw bandwidth needed at exascale. But it can be difficult to achieve high performance for increasingly complex data access patterns, to manage billions of small files efficiently, and to continuously feed large GPU-based systems with the huge datasets required by ML/AI workloads. This webcast examines the different I/O workflows seen on supercomputers today, discusses the approaches the industry is taking to support the convergence of HPC and AI workflows, and highlights some of the innovations in both storage hardware and parallel file system software that will enable high-performance storage at exascale and beyond, covering:
• Overview of typical use cases: numerical simulation, sensor data ingest and analysis, ML/AI, etc.
• Advancements in HPC storage hardware: from HDDs to storage class memory
• Solution design: HPC storage fabrics, software stacks, heterogeneity, and tiering
• Workflows: how to ensure data is available in the right place at the right moment
• Realities of high-performance storage management, from the perspective of end users and storage administrators
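To make the "continuously feed large GPU-based systems" point concrete, here is a minimal sizing sketch; every figure in it (node count, per-node ingest rate, per-target bandwidth) is an illustrative assumption, not a statement about any particular system.

```python
# Back-of-the-envelope sizing for an exascale-class storage system.
# All figures below are illustrative assumptions, not vendor specs.

gpu_nodes = 1_000            # hypothetical GPU node count
gb_per_s_per_node = 4        # assumed training-data ingest rate per node
ost_bw_gb_per_s = 10         # assumed bandwidth of one storage target (OST)

aggregate_bw = gpu_nodes * gb_per_s_per_node          # GB/s the fabric must sustain
osts_needed = -(-aggregate_bw // ost_bw_gb_per_s)     # ceiling division

print(f"Aggregate read bandwidth: {aggregate_bw:,} GB/s")
print(f"Storage targets required at {ost_bw_gb_per_s} GB/s each: {osts_needed}")
```

Even with these modest assumptions, the arithmetic shows why raw bandwidth is the easy part: hundreds of targets can deliver it, while small-file metadata and irregular access patterns remain the hard problems the webcast focuses on.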
Find more content like what you just read:
-
Heterogeneous Design Approaches to the Convergence of HPC and AI
By:
Type: Replay
The convergence of AI and HPC is leading to the need to handle more diverse workloads in high-performance systems. This increase in the diversity of demanding workloads, coupled with the expanding landscape of available accelerators, can be a challenge when designing application-optimized HPC systems. We will explore the benefits of designing HPC systems with heterogeneous accelerator technologies to make the marriage of AI and HPC faster and more efficient. We will take a practical look at what can be achieved today and at the emerging technologies that will drive the HPC designs of the future. Watch this presentation to learn about the following topics:
• Machine learning and HPC: each worthy of its own investment, or better together? (Alternately: marriage for the ages, or heading for divorce?)
• Specialization vs. standardization: how do you maximize specific application performance without creating silos?
-
Supercharge Your HPC and AI Workflow in Federal
By:
Type: Video
Artificial intelligence (AI) is becoming widely used in the Federal space for facial recognition, real-time translation, image detection, and voice recognition. As organizations go into production with AI, the number of data sets being analyzed has exploded. Traditional storage appliances can't keep up, so AI compute is now resembling high-performance computing (HPC). Chesapeake Systems (CHESA), in partnership with Hewlett Packard Enterprise (HPE), WekaIO, Scality, and NVIDIA, provides customers with software-defined storage solutions tailored to HPC and AI workloads that achieve multi-petabyte scalability and high performance while simultaneously controlling costs. Join these five industry leaders for a panel discussion on how your agency can dramatically lower the total cost of ownership of your HPC environments while increasing performance and scalability in your AI workflows.
-
AMD EPYC™: The Optimal Solution for High Performance Computing
By:
Type: Video
The field of High Performance Computing (HPC) has seen incredible advancements over the last few years – driving the demand for powerful and efficient computing solutions. During this time, AMD EPYC processors have emerged as the leading choice for HPC, offering the performance and features that cater to the ever-growing computational demands of modern scientific and industrial applications. Come learn how AMD is working with the leading HPC users to solve scientific computing challenges and get the latest information on AMD’s EPYC™ solutions for HPC.
-
CW ASEAN: Preparing for 5G
By: TechTarget ComputerWeekly.com
Type: Ezine
As telcos gear up to roll out the first 5G networks, enterprises are keen to see how they can take advantage of faster 5G networks to support a broad range of applications. In this edition of CW ASEAN, we look at how enterprises in ASEAN are readying themselves for 5G to take advantage of the new technology. Read the issue now.
-
Energy Companies on the Move: From CentOS to Ubuntu
By:
Type: Talk
Join us to learn what energy companies are doing and take part in the live round table discussion and Q&A. Register today! With the CentOS Project's announcement that it plans to end support for CentOS, the HPC community is left to decide which platform to migrate to. For many, the decision comes down to cost and support. Canonical's Ubuntu is a popular choice for those migrating from CentOS, as it offers a modern, stable platform with regular security updates. Ubuntu is also easy to use and has a large online community where users can find support and guidance. Canonical has years of experience in providing high-performance computing solutions and has been working closely with the HPC community to ensure a smooth transition for those migrating from CentOS. Join us for this round table discussion with experts and industry leaders in HPC. In this webinar we will discuss:
• How companies are transitioning off CentOS
• The newcomer, Rocky Linux
• The different challenges in HPC and cloud computing
• HPC and AI/ML harmony
• Industry software and libraries driving decisions
Speakers: Jon Thor Kristinsson, HPC and Product Manager, Canonical; Craig Bender, Field Engineer, Canonical; Dave Montana, VP of Sales, Canonical; James Beedy, CEO, Omnivector Solutions; John Fuqua, Energy Markets, Canonical (Moderator); Rose Vettraino, Account Executive, Canonical (Moderator)
-
Where are we with ESG? | Laura Starks | FTSE Russell Convenes
By:
Type: Video
In the latest episode of FTSE Russell Convenes, Laura Starks, Professor of Finance at the McCombs School of Business at The University of Texas at Austin, discusses the trade-offs we will have to make to reduce our reliance on oil and gas, which factors impact investing behaviours and how ESG investing has evolved since she started teaching the subject in 2011.
-
Explore New Possibilities with the Convergence of HPC and AI/ML
By:
Type: Talk
Build new revenue streams with emerging AI/deep learning use cases and lower TCO by converging architectures for emerging AI/ML and traditional HPC. Business decision makers, data scientists and researchers are developing new approaches for solving problems with AI and deep learning, requiring HPC-scale compute resources. AI and data analytics workloads benefit from HPC infrastructure that can scale up to improve performance. However, they require different techniques to solve their complex problems. Dell will share a unified architecture that supports both types of workloads, thereby reducing overall TCO.
-
Future of Sustainable Data Centers
By: Penguin Solutions
Type: White Paper
This white paper explores how Penguin Solutions, NVIDIA, and Shell are reimagining sustainable data centers through advanced cooling technologies and GPU-powered infrastructure. Inside, you'll learn how these solutions can boost performance while reducing energy costs and emissions. Read on now to discover the future of sustainable computing.
-
Sneak Peek into (ISC)² Security Congress 2017
By:
Type: Video
Attend our 7th annual conference in Austin, Texas on September 25-27. As cyber threats and attacks continue to rise, (ISC)² Security Congress provides the knowledge, tools, direction and expertise that cybersecurity professionals need. Learn more about (ISC)² Security Congress: https://congress.isc2.org
-
It's Time To Go International, Where Cooling Isn't A Problem... [Podcast]
By:
Type: Video
This week on the podcast, we're going international. We're joined by Rui Gomes from atNorth. atNorth, headquartered in Iceland, has recently deployed WEKA for its HPC and AI workloads. Rui explains his WEKA journey and how atNorth, using WEKA, can offer its customers HPC-as-a-service with the best performance and price point.
-
The Hidden Infrastructure That Powers Genomic Sequencing
By:
Type: Video
Genomics can now take its groundbreaking insights into complex diseases and employ them in patient treatments. But every diagnosis means the computing equivalent of assembling a 3-billion-piece jigsaw puzzle: a job that until recently would take weeks, and now can be done in a few hours. For two decades TGen, a nonprofit medical research institute, has been using high-performance computing to conduct groundbreaking research that is already having life-changing results. But the next generation of AI-driven genomic medicine will require exponentially more data, and the speed with which it can be analysed will literally be a matter of life or death for patients. James Lowery, CIO of TGen, George Vacek of NVIDIA, and Ken Berta of Dell tell The Reg's Tim Phillips how they are building the hidden infrastructure (with Clara Parabricks, accelerated compute, and scale-out PowerScale storage) for TGen's HPC applications. You will hear:
• The scale of TGen's challenge today, and in the future
• Designing and implementing high-performance storage for TGen's HPC solution
• Meeting the performance challenge
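To put rough numbers on that jigsaw puzzle, here is a minimal sketch using typical whole-genome sequencing figures (30x coverage, about one byte per sequenced base); these are generic assumptions, not TGen's actual workload.

```python
# Rough estimate of per-genome data volume and the time to stream it.
# Coverage depth and throughput figures are illustrative assumptions.

bases = 3e9                 # ~3 billion base pairs in the human genome
coverage = 30               # assumed sequencing depth (reads per base)
bytes_per_base = 1          # assumed storage cost per sequenced base

genome_bytes = bases * coverage * bytes_per_base
storage_gb_per_s = 2        # assumed sustained storage throughput

print(f"Raw data per genome: {genome_bytes / 1e9:.0f} GB")
print(f"Time just to stream it once: {genome_bytes / (storage_gb_per_s * 1e9):.0f} s")
```

Streaming ~90 GB is the trivial part; aligning and variant-calling those reads is what consumes the compute hours, which is why accelerated pipelines and high-performance storage both matter.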
-
Expanded AI Solution to Accelerate AI Factory Deployments
By: Penguin Solutions
Type: Analyst Report
Penguin Solutions expanded its OriginAI solution to accelerate AI deployments with predictable performance, cost-efficiency, and expert services. The solution aims to address enterprise challenges around scalability, compliance, and technical expertise for AI workloads. Read this full analyst report to learn more.
-
Orchestrating Data Storage to Enable New Performance & Cost/Performance Profiles
By:
Type: Replay
Along with the growth in unstructured data, the workloads being applied to that data are becoming less structured. This is because clouds have (re)taught us what HPC sites already knew: that compute should be shared for highest utilization. Shared compute requires shared storage and results in many storage workloads being applied at the same time. The explosion of AI/ML brings an entirely new workload that needs to be supported without buying a new storage system. No one wants to buy special-purpose storage. Instead, storage should serve all your needs without having to sync data across different systems for different uses.
-
Redefine data visualization and insights with AI
By: Dell Technologies
Type: Product Overview
In this product overview, you'll learn how to accelerate insights and innovation with Dell PowerEdge servers and accelerator portfolios. Read on now to learn how you can leverage the latest technologies like generative AI, large language models, and digital twins to outpace the competition.
-
Modular by Design: Supermicro’s New Standards-Based Universal GPU Server
By:
Type: Talk
The new Universal GPU server allows customers to choose the best CPUs, GPUs, and switch configurations for their applications and workloads, including dual-processor configurations using either the 3rd Gen Intel Xeon Scalable processor or the 3rd Gen AMD EPYC™ processor. In this webinar, members of the Server Solution Team and a member of Supermicro's Product Office will discuss Supermicro's Universal GPU Server; the server's modular, standards-based design; the important role of the OCP Accelerator Module (OAM) form factor and Universal Baseboard (UBB) in the system; and AMD's next-generation HPC accelerator. In addition, we will get some insights into trends in the HPC and AI/machine learning space, including the different software platforms and best practices that are driving innovation in our industry and daily lives. In particular:
• Tools to enable use of the high-performance hardware for HPC and deep learning applications
• Tools to enable use of multiple GPUs, including RDMA, to solve highly demanding HPC and deep learning models, such as BERT
• Running applications in containers with AMD's next-generation GPU system
-
Mastering Multicloud
By:
Type: Talk
Erik Lönroth has been a technical manager for HPC data centres for about a decade and will share his knowledge of the field here. During the session, he will go through the various components of an HPC cluster and how he utilises Juju and SLURM to achieve a flexible setup across multiple clouds. He will also be available for an open discussion of the rationale and value that come out of his approach. Join the Ubuntu Masters Telegram channel to speak with Ubuntu product managers, engineers and other attendees: https://t.me/joinchat/JOsc1hzTAhbAfjBX1fsqLA
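For context on the SLURM half of that setup, here is a minimal, hypothetical sketch of submitting a batch job programmatically (not taken from the session itself); the partition name, resource requests, and job payload are all illustrative assumptions.

```python
# Minimal SLURM submission sketch: write a batch script and hand it to sbatch.
# Partition, resources, and the job payload are illustrative assumptions.
import subprocess
import tempfile

job_script = """#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --partition=compute
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --time=00:10:00

srun hostname
"""

with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as f:
    f.write(job_script)
    path = f.name

# sbatch prints "Submitted batch job <id>" on success.
result = subprocess.run(["sbatch", path], capture_output=True, text=True, check=True)
print(result.stdout.strip())
```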
-
HPC NOW! #1 of an interactive podcast series.
By:
Type: Talk
This is an episodic webcast with our Dell Technologies host, Tony Rea, that introduces HPC-related topics, tech information and industry experts.
-
How Hardware Enhancements Drive Software-Defined Storage Improvements
By:
Type: Talk
A discussion of the enabling technologies that customers running software-defined storage (SDS) are most interested in (performance):
• New storage technology: PMEM, NVMe, dual-actuator HDDs, NVMe-oF, composable and computational storage/smart media, and new offload strategies like DPUs
• New performance milestones
• Challenges delivering value (rapid ROI)
Speakers: Eric Burgener, Research Vice President, IDC; Liran Zvibel, CEO and Founder, WekaIO; Randy Kreiser, Storage Specialist/FAE, Supermicro; Andrey Kudryavtsev, HPC Storage Architect, Intel; Colin Presly, Senior Director, Office of the CTO, Seagate
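To ground the performance discussion, here is a naive sketch for probing small-random-read rates on whatever device backs a file; the path and sizes are assumptions, and without O_DIRECT the page cache will inflate the numbers, so treat the result as an upper bound.

```python
# Naive random-read IOPS probe. Reads go through the page cache unless the
# file is opened with O_DIRECT, so results are an optimistic upper bound.
import os
import random
import time

PATH = "/tmp/testfile"            # hypothetical test file, pre-created
BLOCK = 4096                      # 4 KiB reads, typical for IOPS testing
N_READS = 10_000

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)

start = time.perf_counter()
for _ in range(N_READS):
    offset = random.randrange(0, size - BLOCK)
    os.pread(fd, BLOCK, offset)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{N_READS / elapsed:,.0f} reads/s at {BLOCK}-byte granularity")
```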
-
Optimization Masterclass: improving Python-based workloads in HPC
By:
Type: Video
With the increasing use of Python-based code in HPC, users often underestimate the impact Python has on application performance, and how inefficiencies can be alleviated through the use of modules and libraries. In this masterclass, Florent Lebeau, Solution Architect at Arm, presents new capabilities in Arm Forge for profiling Python to fully optimize HPC workloads:
• The latest diagnostic techniques for pinpointing problematic Python-based code
• Practical tips for targeting rewrite and optimization work
• Effective use of libraries and modules for dealing with inefficient code sections
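Arm Forge is the commercial tool featured in the session; as a stand-in, the standard library's cProfile illustrates the same diagnostic step. A minimal sketch, with a deliberately slow pure-Python loop and the vectorized NumPy call that should replace it:

```python
# Profile a pure-Python hot loop, then compare with the vectorized
# library equivalent -- the pattern the masterclass recommends.
import cProfile
import numpy as np

def slow_norm(values):
    """Pure-Python sum of squares: interpreter overhead on every element."""
    total = 0.0
    for v in values:
        total += v * v
    return total ** 0.5

data = np.random.rand(1_000_000)

cProfile.run("slow_norm(data)")          # time dominated by the Python loop
cProfile.run("np.linalg.norm(data)")     # same result, one C-level call
```

The profiler output makes the inefficiency visible before any rewrite work begins, which is the masterclass's core point: measure first, then target libraries at the hot spots.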
-
Four Stages for Government Security Teams to Manage Risk
By:
Type: Replay
Moderated by CSO, this webinar features a fireside chat with Cam Beasley, CISO of the University of Texas at Austin, and Jae Lee, Product Marketing Director, Security Markets at Splunk, who will examine the security challenges facing highly regulated organizations today, and explain how to leverage data from a broad range of sources to better understand:
- A four-stage security maturity curve to determine your current status
- Tools and guidance to make progress along that curve and improve your security posture
- Benefits attained by forward-thinking enterprises and government agencies who are leveraging this methodology
Featuring: Cam Beasley, CISO, UT Austin; Jae Lee, Director, Product Marketing, Splunk Inc.
-
Ask an Event Manager: Backstage at SXSW
By:
Type: Talk
South by Southwest® (SXSW®) is a global celebration of the convergence of tech, film, music, education, and culture. Held every year in Austin, Texas, SXSW offers speakers a unique platform and a global audience. Interested in speaking there? We've invited one of the senior managers and conference organizers to give us a look behind the scenes at SXSW. You will learn:
- The background on SXSW
- SXSW application processes
- Common mistakes found in speaker applications
- What speakers should know when applying to SXSW
-
How Crusoe Built an AI Cloud Service for Scale
By:
Type: Talk
Insights from up-leveling a compute cloud for higher performance and higher margins. With today's AI boom, more enterprises are turning to cloud-based services to capitalize on AI without having to make a large capital investment. To serve their needs, service providers such as Crusoe have launched specialty clouds that offer enterprises fast, economical, easy-to-use high-performance compute services. Featured recently on 60 Minutes, Crusoe recovers stranded energy to power its cloud service, enabling customers to run AI/ML and HPC workloads at petabyte scale. Through careful curation of their tech stack they built a service that delivers the same efficiencies as a hyperscale provider, but managed by their lean-and-mean DevOps team. Learn from three leaders in cloud infrastructure:
- How to engineer a platform for dynamic, fast resource provisioning and allocation
- How to architect a system that reduces administrative overhead and improves margins
- How software-defined storage forms the foundation for a flexible, resilient tech stack
- All about modern systems that deliver the speed and scale required for AI/ML and HPC
-
Considerations When Building a Massive Data Storage Solution
By:
Type: Video
Emerging HPC, AI, and analytics applications across many disciplines are driving new requirements for easy online accessibility to massive data sets. Join Quantum experts to discuss:
• Challenges and tradeoffs that organizations classically face in building at hyperscale
• How cloud architectures should be influencing your own in-house designs
• New technologies that are enabling a new generation of storage clouds
• Design principles that simultaneously optimize performance, durability, availability, and affordability
• How new advances are blurring the line between active and cold data sets
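One of the tradeoffs listed above, durability versus affordability, reduces to simple erasure-coding arithmetic. A minimal sketch with a hypothetical 10+4 layout (an illustrative assumption, not a statement about any Quantum product):

```python
# Storage overhead and loss tolerance for a k+m erasure-coding scheme.
# The 10+4 layout below is an illustrative assumption, not a product spec.

k = 10          # data fragments
m = 4           # parity fragments

overhead = (k + m) / k              # raw capacity consumed per usable byte
tolerated_failures = m              # fragments that can be lost without data loss

print(f"{k}+{m} scheme: {overhead:.2f}x raw capacity per usable byte, "
      f"survives {tolerated_failures} simultaneous fragment losses")

# Compare with 3-way replication: 3.0x overhead, survives 2 losses.
```

The same arithmetic explains why hyperscale designs favor wide erasure-coded stripes over replication: more failures tolerated per byte of raw capacity purchased.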
-
Solving I/O Bottleneck with DAOS and PMEM
By:
Type: Replay
Intel® has been building an entirely open-source software ecosystem for data-centric computing, fully optimized for Intel® architecture and non-volatile memory (NVM) technologies, including Intel® Optane™ DC persistent memory and Intel® Optane™ DC SSDs. Distributed Asynchronous Object Storage (DAOS) is the foundation of the Intel Exascale storage stack. DAOS is an open-source software-defined scale-out object store that provides high bandwidth, low latency, and high I/O operations per second (IOPS) storage containers to HPC applications. In addition, it enables next-generation data-centric workflows that combine simulation, data analytics, and AI.
-
Deploying the Ultimate GPU Acceleration Tech Stack to Scale AI, Sciences & HPC
By:
Type: Replay
As the size of AI and HPC datasets continues to grow exponentially, the amount of time spent loading data for your fast GPUs continues to expand due to slow I/O, bottlenecking your GPU-accelerated application performance. In this session, NVIDIA's Rob Davis and Supermicro's Alok Srivastav discuss the latest technology leap in storage and networking that eliminates this bottleneck to bring your GPU acceleration to the next level. Topics include GPUDirect Storage and RDMA, NVMe-oF, and PCIe 4.0, and the session shows how to start building an ultimate GPU-accelerated application machine through Supermicro's latest technology innovations.
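For a feel of what GPUDirect Storage looks like from application code, here is a minimal sketch using the RAPIDS KvikIO binding for cuFile, mirroring KvikIO's published usage; the file path and array size are assumptions, and a CUDA-capable system with kvikio installed is required.

```python
# Read a file straight into GPU memory via cuFile/GPUDirect Storage,
# bypassing the CPU bounce buffer when the hardware path supports it.
# Requires the RAPIDS kvikio package, cupy, and a CUDA-capable GPU.
import cupy as cp
import kvikio

a = cp.arange(1_000_000, dtype=cp.float32)

f = kvikio.CuFile("/tmp/gds-demo.bin", "w")   # hypothetical path
f.write(a)                                    # write from GPU buffer
f.close()

b = cp.empty_like(a)
f = kvikio.CuFile("/tmp/gds-demo.bin", "r")
f.read(b)                                     # DMA directly into GPU buffer
f.close()

assert bool((a == b).all())
```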
-
Dell Technologies Virtual Summit - How AMD Supports Healthcare
By:
Type: Video
In this session, AMD's Raghu Nambiar, Corporate Vice President for Software and Solutions, educates the audience on how AMD, VMware, and Dell Technologies are collaborating in several areas, including AI/ML and HPC environments.
-
Software-Defined Durability, Protection and Availability at Massive Scale
By:
Type: Replay
Not all object storage systems are alike. Designing systems to store terabytes to exabytes of data, for the long term, not only requires scalable performance but also serious consideration of the durability, protection and availability of both the data and the underlying storage platform. In this session, we will discuss how object storage and scale-out architectures are evolving to meet the future demands of incessant unstructured data growth. Join us to learn:
• How ActiveScale's software architecture uniquely solves the challenge of scale and durability
• How rebalancing prevents some systems from meeting your data growth needs
• How object storage is a key resource in your fight against ransomware
• How new storage architectures are driving innovation in genomics, HPC, and AI
Join Frederik De Schrijver, Quantum, and Paul Mcleod, Supermicro, to learn how we are working together to drive new levels of capability.
-
CIO's Perspective on HPC Cloud
By:
Type: Talk
HPC and AI, or HPCAI for short, are becoming increasingly important for both academia and industry. Many emerging research fields, such as biomedicine, genomics, materials science and nanotechnology, environmental science, and pharmaceuticals, rely heavily on the power of HPCAI to understand very large and complex problems. Moreover, many industries, including pharmaceuticals, finance and banking, insurance, manufacturing, and security, rely more and more on advances in HPCAI. These workloads usually involve the processing of massive data and sometimes connections to massive numbers of IoT devices, so ensuring that the processing and data are secure is very important. A balance between flexibility, ease of use, and security must be achieved. This presentation shares the security issues and concerns of CIOs who need to handle HPC cloud infrastructure for their organizations. Some general practices and guidelines will be discussed.
-
Aligning HPC Researchers and Security Teams
By:
Type: Replay
Scientific computing is becoming increasingly important as an engine for modern science, as algorithms, computer models, and simulations become an increasingly common way to investigate hypotheses and perform analysis against large data sets. Moreover, the growing shift of the HPC systems that perform these computations to the cloud, or to systems with a web front end, means that these systems now face an increased risk of cyber-attacks, making it critical to develop ways to secure them. This presentation discusses recent work of the HPC working group on using tabletop exercises to get the security conversation going with scientific research teams, and explores how application security can be used to enhance the quality and reliability of scientific software, making for not just better security but better science as well.
-
A Faster Time to Science: Speeding Research Pipelines
By:
Type: Video
In partnership with Intel. Rapid analytics and AI are the key to mining meaningful insights, and groundbreaking discoveries depend on quick access to clinical and research data. The pace of life sciences research has never been more accelerated, all because of innovation and technology. We discuss how a new wave of technology is addressing the vast explosion of genomics data and turning it into meaningful insight and a faster time to science. Pure, with Intel technology, can speed genomics pipelines by up to 24x. Pure FlashBlade is a single, scale-out storage solution for genomics and other HPC use cases, and can be run on-prem or cloud-adjacent, reducing or eliminating onerous cloud storage fees.
-
Advancing AI: New Supermicro Servers with AMD's MI300 Series Accelerators
By:
Type: Talk
Join Supermicro and AMD experts for a live webinar to discover how Supermicro's new H13 GPU systems, powered by AMD Instinct™ MI300 series accelerators, can advance your large-scale AI and HPC infrastructure. In this webinar, you will learn about:
- The new AMD Instinct MI300 series
- Supermicro's new AMD Instinct MI300 powered systems
- Workloads and trends in AI and HPC infrastructure
Presenters: Daniel Chen, Sr. Director, Solution Management, Supermicro; Tom Ling, Product Management, Supermicro; Mark Orthodoxou, Sr. Director, Business Development, Datacenter GPU BU, AMD; Binh Chu, Sr. Product Manager, AMD; Michael Schulman, Sr. Manager, Supermicro (Moderator)
-
Supercharging GPU Cloud for AI Workloads
By:
Type: Video
Get the CTO perspective on revolutionary advancements in GPU cloud infrastructure. Watch our exclusive webinar featuring CTOs from Applied Digital, HighFens, and WEKA as they explore how to meet the demands of tomorrow's AI workloads. This discussion explores cutting-edge strategies and technologies designed to enhance GPU cloud efficiency, optimise data pipelines for high-performance computing (HPC), and achieve significant cost savings. Watch the webinar for insights on:
• Maximising GPU utilisation: learn how to revolutionise hardware efficiency and maximise ROI
• Overcoming data challenges: discover ways to streamline data pipelines for generative AI and HPC applications
• Future-proofing infrastructure: explore how industry leaders are building solutions for the AI workloads of today and tomorrow
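The "maximising GPU utilisation" point usually comes down to overlapping storage reads with compute. A minimal standard-library sketch of that prefetching pattern; the batch counts and simulated timings are assumptions, standing in for real storage reads and GPU kernels.

```python
# Prefetching pipeline: a background thread keeps a queue of batches full
# so the (simulated) GPU never waits on storage.
import queue
import threading
import time

BATCHES = 8
prefetch = queue.Queue(maxsize=2)   # bounded: limits memory held in flight

def loader():
    for i in range(BATCHES):
        time.sleep(0.05)            # simulated storage read
        prefetch.put(f"batch-{i}")
    prefetch.put(None)              # sentinel: no more data

threading.Thread(target=loader, daemon=True).start()

while (batch := prefetch.get()) is not None:
    time.sleep(0.05)                # simulated GPU step, overlapped with next read
    print("processed", batch)
```

Because the loader runs ahead of the consumer, the storage latency hides behind compute; the same idea underlies the prefetching data loaders in production ML frameworks.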
-
Supermicro SuperBlade Powering Modern Workloads
By:
Type: Video
Today's cloud, HPC, and AI workloads require a new level of compute performance. During this webinar, Director of Product Management Shanthi Adloori explores the results and benefits of the Supermicro A+ SuperBlade, featuring 3rd Gen AMD EPYC processors, powering modern workloads.
-
World Record Performance, Supermicro SuperBlade® Powers Cloud & Modern Workloads
By:
Type: Talk
Companies across industries that are embarking on a digital transformation journey face a strategic imperative to build faster and more effective delivery platforms to jump-start growth, speed time to market, and foster innovation. With the SuperBlade, you can modernize your IT infrastructure while bringing cloud-like agility and economics to on-premises infrastructure. Supermicro's SuperBlade®, powered by AMD EPYC™ processors, sets world-record performance. It provides an efficient architecture that combines performance, density, and advanced networking in one compact build to power your enterprise, cloud, AI/ML and HPC applications. During this webinar we will cover:
• SuperBlade features, world-record performance and benefits
• SuperBlade's density-optimized, resource-saving architecture
• How customers can achieve lower TCO with workload-optimized SuperBlade servers
• Various use cases around cloud, HPC, VDI, virtualization, AI/ML and enterprise apps
-
How can Azure Quantum Elements accelerate scientific discovery today and in the future?
By:
Type: Video
Transform your chemistry and material science R&D with HPC and AI and accelerate innovation with Azure Quantum Elements. Join us to learn how to accelerate manufacturing and chemical innovation with agility, power scientific breakthroughs, and adapt to today's evolving pressures while pioneering the products of tomorrow that will help you stay ahead.
-
Unleash EDA Potential with AMD, Azure, and Ansys: Part 2
By:
Type: Video
Azure provides a flexible and scalable EDA environment, with a variety of HPC services and resources to meet the demanding requirements of electronic design workflows. InfiniBand interconnect is one such service. Come learn about a collaborative innovation involving AMD, Azure, and Ansys that focuses on harnessing Azure's InfiniBand-based storage solution to overcome the limitations of a traditional NFS-based compute grid. By combining the power of Azure's cloud infrastructure with Ansys' RedHawk-SC and AMD EPYC processors, chip designers can accelerate their design iterations and achieve optimized results for complex electronic systems.
-
Unleash EDA Potential with AMD, Azure, and Ansys: Part 1
By:
Type: Video
Azure provides a flexible and scalable EDA environment, with a variety of HPC services and resources to meet the demanding requirements of electronic design workflows. InfiniBand interconnect is one such service. Come learn about a collaborative innovation involving AMD, Azure, and Ansys that focuses on harnessing Azure's InfiniBand-based storage solution to overcome the limitations of a traditional NFS-based compute grid. By combining the power of Azure's cloud infrastructure with Ansys' RedHawk-SC and AMD EPYC processors, chip designers can accelerate their design iterations and achieve optimized results for complex electronic systems.