New York
San José
IBM Research has been at the forefront of the computing revolution from the start, with our researchers playing a role in some of the most important advancements, from the hard drive and the floppy disk to mainframes and the personal computer. We’ve been here since the earliest days of computing, and we’re leading the charge for what’s next.
Since our first lab opened in 1945, we’ve authored more than 110,000 research publications. Our researchers have won six Nobel Prizes, six Turing Awards, and IBM has been granted more than 150,000 patents.
We are the home of computing. We choose the big, urgent, mind-bending work that endures and shapes generations. We’re a group of researchers, scientists, technologists, engineers, designers, and thinkers inventing what’s next in computing.
The advances in computing over the next decade will lead to profound consequences for our civilization. We’re discovering new materials that will be used in the next generation of computer chips; we’re building bias-free AI that can take the burden out of business decisions; we’re designing a hybrid-cloud platform that operates as the world’s computer. We’re moving quantum computing from theory to systems that are redefining the world.
At IBM Research we live by the scientific method. It’s at the core of everything we do. We choose impact over market cycles, vision over vanity. We deeply believe that creative freedom, excellence, and integrity are essential to any breakthrough. We operate with a backbone. We don’t cut corners. We take responsibility for technology and its role in society. We make decisions with a conscience — for a future that we believe is worth living in. We recognize the immense power and potential of computing — not as a commodity, but as an agent of progress and connection.
This is the future, built right.
IBM Research – Almaden is IBM's Silicon Valley innovation lab. Scientists, computer engineers, and designers at Almaden are pioneering scientific breakthroughs across disruptive technologies including artificial intelligence, healthcare and life sciences, quantum computing, blockchain, storage, the Internet of Things, and accessibility.
This work builds on a rich history of breakthroughs at Almaden’s unique campus in San Jose, California, which include the distributed relational database, the ability to position individual atoms, the first data mining algorithms, and innovations in data storage technology.
The Thomas J. Watson Research Center includes facilities in Yorktown Heights and Albany, New York as well as Cambridge, Massachusetts. It serves as the headquarters of IBM Research – one of the largest industrial research organizations in the world – with 12 labs on six continents. Scientists at T.J. Watson, and at IBM labs around the globe, are pioneering scientific breakthroughs across today’s most promising and disruptive technologies including the future of artificial intelligence, blockchain and quantum computing.
Our scientists collaborate across disciplines to address some of the world’s most complex problems and promising opportunities. We believe that profound breakthroughs come when businesses, governments, academic institutions and others work together to tap into diverse points of view and expertise. Collectively, we’re working to understand how systems are interconnected and the role technology plays within them.
The IBM Analog AI team is a global team advancing the forefront of in-memory computing technologies to overcome the von Neumann bottleneck. By performing vector-matrix multiplication (VMM) at the location where data is stored, we save time and energy by reducing data movement between memory and the compute unit. This efficient VMM computation is particularly powerful in deep neural network (DNN) applications, where the VMMs can tolerate a certain level of errors, for example due to reduced-precision or noisy computation, without affecting the accuracy of the DNN model. The team has demonstrated near software-equivalent DNN accuracies both in phase-change memory (PCM) hardware and in software simulations. Since 2015 we have published high-profile papers with more than 7,000 citations, and we author numerous patents every year.
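To give a feel for this error tolerance, here is a minimal sketch (illustrative only, not the team's hardware model) in which a VMM is perturbed with additive noise standing in for analog non-idealities:

    # Illustrative only: additive Gaussian noise as a crude stand-in for analog
    # non-idealities; real PCM devices have far richer error behavior.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 128))   # weights, conceptually stored in memory
    x = rng.standard_normal(128)          # input activation vector

    exact = W @ x                                           # ideal digital VMM
    noisy = (W + 0.02 * rng.standard_normal(W.shape)) @ x   # noisy in-memory VMM

    rel_err = np.linalg.norm(noisy - exact) / np.linalg.norm(exact)
    print(f"relative error of noisy VMM: {rel_err:.3f}")

In DNN inference, small perturbations of this kind typically wash out across layers, which is why analog VMM can reach near software-equivalent accuracy.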
The Analog AI global team includes members from Almaden/California, Yorktown/New York, Albany/New York, Tokyo, and Zurich. The Almaden Research team holds technical leadership roles in a broad range of topics, including PCM hardware testing, circuit design, algorithm simulations, application exploration, architecture definition, and software development. The team has hosted many student interns and visiting scholars from around the world, including the US, Europe, Brazil, Japan, and Taiwan. Some of these students are now regular IBM employees, and many have continued to advance their careers in related fields.
As part of the IBM Research Semiconductor team, you will conduct world-class research on AI hardware using in-memory computing for deep neural network acceleration. The IBM Almaden Analog AI team has published an in-memory computing (IMC) chip with phase-change memory (PCM) integrated in the metal stack on top of 14 nm CMOS circuitry for deep learning acceleration (reference: https://www.nature.com/articles/s41586-023-06337-5). This chip, containing 1 million PCM devices per tile and 34 tiles per chip, is an important stepping stone towards a scalable and configurable architecture (reference: https://ieeexplore.ieee.org/abstract/document/9957094).
In this internship, the student will develop software-stack and/or hardware components for an in-memory computing architecture. Research could include deep neural network accuracy simulations, hardware demonstrations, and power-performance modeling. The student will gain hands-on experience implementing DNN workloads, such as CNNs and Transformers, for deployment onto the IMC chip and in simulation using the AIHWKIT (reference: https://github.com/IBM/aihwkit).
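As a taste of the simulation side, the following sketch is adapted from the public AIHWKIT getting-started example; the layer sizes and training data are arbitrary placeholders rather than a real DNN workload:

    # Minimal sketch based on the public AIHWKIT examples; sizes and data
    # here are arbitrary placeholders, not a real workload.
    from torch import Tensor
    from torch.nn.functional import mse_loss

    from aihwkit.nn import AnalogLinear    # linear layer backed by simulated analog tiles
    from aihwkit.optim import AnalogSGD    # SGD variant aware of analog tiles

    x = Tensor([[0.1, 0.2, 0.4, 0.3], [0.2, 0.1, 0.1, 0.3]])
    y = Tensor([[1.0, 0.5], [0.7, 0.3]])

    model = AnalogLinear(4, 2)             # weights live on a simulated analog tile
    opt = AnalogSGD(model.parameters(), lr=0.1)
    opt.regroup_param_groups(model)

    for epoch in range(10):
        opt.zero_grad()
        loss = mse_loss(model(x), y)
        loss.backward()
        opt.step()
    print(f"final loss: {loss.item():.4f}")

The same toolkit exposes configurable device and noise models, which is how accuracy simulations for larger CNN and Transformer workloads are typically set up.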
From June 16 to September 5, 2025 (adjustable at the discretion of the organization)
The AI for EDA department is responsible for the development and integration of AI and ML algorithms with applications in Electronic Design Automation. Hardware design is suffering from a talent gap: the number of new hardware engineers cannot keep up with the ever-increasing demand for faster and more powerful chips. As a result, hardware engineers are under pressure to design more complex and more powerful hardware in shorter periods of time. The hardware design field is ripe for AI disruption to assist hardware engineers with improved productivity and faster design cycles. The AI for EDA department at IBM Research covers the entire spectrum of design and takes advantage of technologies such as Deep Learning, Graph Neural Networks, Transformers, Large Language Models, and Reinforcement Learning. Current projects include the application of Large Language Models for coding and assistant chatbots, distributed optimization for the digital design flow and analog/mixed-signal circuit parameters, graph neural networks for prediction of design behavior, and reinforcement learning for macro-placement. These technologies can assist hardware designers in many steps of their design workflow to improve their productivity.
The members of the AI for EDA team are located at the IBM Almaden Research Center. They have experience and expertise in a broad range of topics including Computer Vision, Natural Language Processing, Distributed Machine Learning, and Generative AI, in addition to Electronic Design Automation.
In this internship, the student will work alongside hardware designers to develop AI-based solutions with applications in the field of Electronic Design Automation (EDA). The research includes using different AI technologies such as GNNs, reinforcement learning, and Transformers.
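As one illustrative direction (a generic sketch, not an IBM tool: the "netlist" graph and the predicted property are made up), a graph neural network can be trained to predict a design-level property from a netlist represented as a graph, e.g. with PyTorch Geometric:

    # Generic GNN sketch with PyTorch Geometric; the toy graph and the
    # predicted quantity are hypothetical placeholders.
    import torch
    from torch_geometric.data import Data
    from torch_geometric.nn import GCNConv, global_mean_pool

    class NetlistGNN(torch.nn.Module):
        def __init__(self, in_dim=8, hidden=32):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden)
            self.conv2 = GCNConv(hidden, hidden)
            self.readout = torch.nn.Linear(hidden, 1)  # e.g. predicted delay or congestion

        def forward(self, data):
            h = self.conv1(data.x, data.edge_index).relu()
            h = self.conv2(h, data.edge_index).relu()
            h = global_mean_pool(h, data.batch)         # graph-level embedding
            return self.readout(h)

    # A toy 3-node "netlist": node features and connectivity are arbitrary.
    graph = Data(
        x=torch.randn(3, 8),
        edge_index=torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]]),
        batch=torch.zeros(3, dtype=torch.long),
    )
    print(NetlistGNN()(graph))

In practice the node features would encode cell and pin attributes and the label would come from downstream EDA tool reports.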
From June 16 to September 5, 2025 (adjustable at the discretion of the organization)
Computers have never been more important to the world. At IBM Research, we’re designing new systems that provide flexible, secure computing environments — from bits to neurons and qubits. We’re working on innovations in hybrid cloud infrastructure, operating systems, and software. Our goal is to create technologies that improve performance, security, and ease of use across hybrid and multi-cloud computing. We want to enable clients to dynamically compose best-of-breed services and applications freely and frictionlessly across distributed computing environments and accelerate data-driven innovations.
More: https://research.ibm.com/hybrid-cloud
Hybrid cloud infrastructure research, including application workload performance analysis/modeling, system software/cloud technology, and system server hardware co-design.
From June 9 to August 31, 2025 (adjustable at the discretion of the organization)
Computers have never been more important to the world. At IBM Research, we’re designing new systems that provide flexible, secure computing environments — from bits to neurons and qubits. We’re working on innovations in hybrid cloud infrastructure, operating systems, and software. Our goal is to create technologies that improve performance, security, and ease of use across hybrid and multi-cloud computing. We want to enable clients to dynamically compose best-of-breed services and applications freely and frictionlessly across distributed computing environments and accelerate data-driven innovations.
More: https://research.ibm.com/hybrid-cloud
Hybrid cloud infrastructure research, including application workload performance analysis/modeling, system software/cloud technology, and system server hardware co-design.
From June 9 to August 31, 2025 (adjustable at the discretion of the organization)
Computers have never been more important to the world. At IBM Research, we’re designing new systems that provide flexible, secure computing environments — from bits to neurons and qubits. We’re working on innovations in hybrid cloud infrastructure, operating systems, and software. Our goal is to create technologies that improve performance, security, and ease of use across hybrid and multi-cloud computing. We want to enable clients to dynamically compose best-of-breed services and applications freely and frictionlessly across distributed computing environments and accelerate data-driven innovations.
As the foundational technologies for composability, e.g., CXL and PCIe Gen5/6, become ready, we plan to re-evaluate the related technology to understand the tradeoffs between performance impact and flexibility, specifically for distributed AI workloads. This job will allow the participant to try out multiple external enclosures from different vendors with high-end, datacenter-grade accelerators. The comprehensive evaluation results, including the performance and limitations of existing solutions, could be valuable to many people in academia and industry, and could be published as a paper. The participant may also further develop a resource management mechanism and/or investigate the potential security concerns of existing composable solutions.
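As a flavour of the kind of measurement involved (a simplified sketch, not the planned methodology; sizes and repeat counts are placeholders), one can compare host-to-accelerator transfer bandwidth for a direct-attached versus an externally enclosed GPU with a few lines of PyTorch:

    # Simplified bandwidth probe; a real evaluation would sweep transfer sizes,
    # directions, and topologies (PCIe/CXL switches, enclosures) and repeat runs.
    import time
    import torch

    def h2d_bandwidth_gb_s(size_mb=512, repeats=10, device="cuda:0"):
        # Pinned host buffer so copies reflect link bandwidth, not pageable memory.
        buf = torch.empty(size_mb * 1024 * 1024, dtype=torch.uint8, pin_memory=True)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(repeats):
            buf.to(device, non_blocking=True)   # host-to-device copy
        torch.cuda.synchronize()                # wait for all copies to finish
        elapsed = time.perf_counter() - start
        return size_mb * repeats / 1024 / elapsed

    if torch.cuda.is_available():
        print(f"H2D bandwidth: {h2d_bandwidth_gb_s():.1f} GB/s")

Running the same probe against a locally attached accelerator and one reached through an external enclosure gives a first-order view of the performance cost of composability.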
From June 9 to August 31, 2025 (adjustable at the discretion of the organization)
Computers have never been more important to the world. At IBM Research, we’re designing new systems that provide flexible, secure computing environments — from bits to neurons and qubits. We're working on innovations in hybrid cloud infrastructure, operating systems, and software. Our goal is to create technologies that improve performance, security, and ease of use across hybrid and multi-cloud computing. We want to enable clients to dynamically compose best-of-breed services and applications freely and frictionlessly across distributed computing environments and accelerate data-driven innovations.
More: https://research.ibm.com/hybrid-cloud
One of the open issues in distributed AI model training is how to deal with unexpected hardware failures. This job is to investigate state-of-the-art AI frameworks, e.g., PyTorch, and propose a mechanism to deal with common hardware failures such as GPU or node failure. We may implement a proof-of-concept prototype and test it on state-of-the-art GPU systems to validate the proposed solution. The result can be published as a paper, and the PoC source code may also be contributed to the open-source community.
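One common building block for such a mechanism is periodic checkpointing, so that a job restarted after a GPU or node failure (for example by torchrun's elastic mode) resumes from the last saved step. The sketch below is a minimal single-process skeleton; the checkpoint path, model, and interval are placeholders:

    # Minimal checkpoint/resume skeleton; a real solution would also handle
    # distributed state (DDP/FSDP), data-loader position, and collective timeouts.
    import os
    import torch

    CKPT = "train_ckpt.pt"  # placeholder path

    model = torch.nn.Linear(16, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    start_step = 0

    if os.path.exists(CKPT):                      # resume after a failure/restart
        state = torch.load(CKPT)
        model.load_state_dict(state["model"])
        opt.load_state_dict(state["opt"])
        start_step = state["step"] + 1

    for step in range(start_step, 1000):
        opt.zero_grad()
        loss = model(torch.randn(32, 16)).pow(2).mean()  # dummy training step
        loss.backward()
        opt.step()
        if step % 100 == 0:                       # periodic checkpoint
            torch.save({"model": model.state_dict(),
                        "opt": opt.state_dict(),
                        "step": step}, CKPT)

The research question is how to make this recovery automatic, fast, and transparent to the training framework at large scale.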
From June 9 to August 31, 2025 (adjustable at the discretion of the organization)
IBM Research has a dedicated focus on Accelerated Discovery of Sustainable Materials, and one facet of that work is an active research effort in energy storage, particularly batteries. The work is primarily centered at IBM Research – Almaden in San Jose, CA, in collaboration with IBM’s global research team. We combine experimental and cutting-edge computational technologies, including artificial intelligence, to explore, develop, and validate new, more sustainable battery materials and chemistries capable of supporting clean transportation and renewable energy infrastructure.
We are seeking a highly motivated and talented undergraduate or graduate student to join our research team focused on developing innovative battery technologies using generative AI. This internship will provide a unique opportunity to contribute to cutting-edge research and gain hands-on experience at the intersection of AI and materials science. The intern will work on a research project whose objective is to fine-tune a Large Language Model (such as LLaMA or equivalent) to predict suitable electrolytes for a given anode-cathode system in battery applications. By leveraging the power of LLMs, we aim to expedite the electrolyte selection process and enhance the performance and safety of battery technologies. The project will involve collecting data on anode-cathode combinations and their compatible electrolytes, fine-tuning the appropriate LLM on the curated dataset using techniques like transfer learning and prompt engineering, and developing the prediction framework. The intern will work collaboratively with lab scientists to experimentally validate and demonstrate the successful application of LLMs in energy storage applications.
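As a rough sketch of what the fine-tuning step might look like (the base model, the file electrolyte_pairs.jsonl, and all hyperparameters below are hypothetical placeholders, not project choices), parameter-efficient methods such as LoRA via Hugging Face's peft library keep the adaptation lightweight:

    # Illustrative LoRA fine-tuning skeleton; dataset and base model are
    # hypothetical stand-ins for the curated data and chosen LLM.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments,
                              DataCollatorForLanguageModeling)

    base = "meta-llama/Llama-2-7b-hf"            # placeholder base model
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)
    model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                             task_type="CAUSAL_LM"))

    # Each record might read: "Anode: graphite | Cathode: NMC811 -> Electrolyte: ..."
    ds = load_dataset("json", data_files="electrolyte_pairs.jsonl")["train"]
    ds = ds.map(lambda r: tok(r["text"], truncation=True, max_length=256))

    Trainer(model=model,
            args=TrainingArguments("electrolyte-lora", num_train_epochs=3,
                                   per_device_train_batch_size=4),
            train_dataset=ds,
            data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()

Candidate electrolytes predicted by the fine-tuned model would then be handed to lab scientists for experimental validation.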
From June 16 to September 5, 2025 (adjustable at the discretion of the organization)