The official conference app for ASPLOS '24: ACM International Conference on Architectural Support for Programming Languages and Operating Systems


Jobs Posted on the Whova Community Board of ASPLOS '24: ACM International Conference on Architectural Support for Programming Languages and Operating Systems

If you know anyone in the job market, feel free to share these postings with them.

ASIC Engineer, Machine Learning Architecture (PhD)
Meta
Meta Platforms Inc. is seeking an ASIC Engineer, Architecture to join our Infrastructure organization. This organization is responsible for building and maintaining the data centers that host all of our services - Facebook, Instagram, WhatsApp etc. These servers and data centers are the foundation upon which our rapidly scaling infrastructure efficiently operates and upon which our innovative services are delivered. In this role, you will be an integral member of an ASIC team to build accelerators for some of our top workloads enabling our data centers to scale efficiently. You will have an opportunity to work with AI/ML experts in the company, to evaluate algorithms, develop functional and performance models and help architect state-of-the-art machine learning ASICs. Come work and learn alongside our expert engineers to build “Green” data center accelerators.
ASIC Engineer, Machine Learning Architecture (PhD) Responsibilities
Implement and analyze algorithms and architectures targeting Machine Learning SoCs.
Analyze and map data center workloads to ASIC architecture.
Develop performance and functional models at different levels of abstraction to validate the architecture.
Work with a broad array of cross functional partners in ASIC design, verification, silicon bring-up, firmware and software development to ensure the successful production deployment of the SoCs.
Implement architecture documentation and other collateral necessary for the use and integration of your designs.
Minimum Qualifications
Currently has, or is in the process of obtaining, a PhD degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience. Degree must be completed prior to joining Meta.
Currently has, or is in the process of obtaining a Bachelor’s degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience.

(Additional info in the link)
Link: https://www.metacareers.com/v2/jobs/350378324577131/
Senior Researcher (GenAI)
Microsoft
At Azure Research - Systems, we are looking for a PhD grad with 2-3 years of experience in AI/GPU systems, or a fresh PhD grad with a thesis in a similar direction.
We work across the stack on Generative AI cloud efficiency. More details at https://aka.ms/azrs under the project Efficient AI.

Please send resumes to esha.choukse@microsoft.com, or catch me at the conference.
Real-Time Embedded OS Researcher PhD
Huawei Technologies Canada Co. Ltd.
At Huawei Technologies Canada we are looking for a PhD grad with 2-3 years of technical experience in embedded and RTOS kernels, or a new PhD grad with a thesis in a similar direction. We work on new OS architecture as well as performance, ensuring our production programs are on the leading edge of delighting customers. More information is on our job posting page:
Link: https://huaweicanada.recruitee.com/o/researcher-realtime-embedded-os
Senior Kernel Developer - RTOS
Huawei Technologies Canada Co. Ltd.
At Huawei Technologies Canada we are looking for a senior developer with 4-8 years of expert technical experience in embedded and RTOS kernels. We work on new OS architecture as well as performance, ensuring our production programs are on the leading edge of delighting customers. More information is on our job posting page:
Link: https://huaweicanada.recruitee.com/o/senior-kernel-developer-rtos
Senior Software Performance Engineer
Zoox
Zoox is building advanced self-driving hardware and software solutions. The Software Core Performance team’s mission is to analyze, optimize, and provide guidance to the software and hardware teams to meet expected system performance targets. To attain the utmost efficiency that the system demands, we need an expert who understands both compute hardware architecture as well as the algorithms and middleware that run on it.
Link: https://zoox.com/careers/e372b5b5-fc1c-44e1-8012-0878a4615ce6
Sr Staff Engineer - Programming Systems
Uber Technologies Inc.
At Uber’s Programming Systems Group (PSG), we develop programming language (PL) techniques to enhance developer productivity and make our systems efficient and reliable. We leverage novel work on compiler optimizations, static and dynamic program analysis, performance tooling and optimizations, and generative AI as applied to developer tooling.

PSG members focus on solving real problems at scale for Uber developers across all languages and platforms. The team has a track record of innovative PL research (publications in PLDI, OOPSLA, ICSE, FSE, ASPLOS, CGO) and cutting-edge industry-standard open-source tools.
Link: https://www.uber.com/global/en/careers/list/128579/
Software Engineer Intern (GenAI/Compiler)
Uber Technologies Inc.
At Uber’s Programming Systems Group (PSG), we develop programming language (PL) techniques to enhance developer productivity and make our systems efficient and reliable. We leverage novel work on compiler optimizations, static and dynamic program analysis, performance tooling and optimizations, and generative AI as applied to developer tooling.

PSG members focus on solving real problems at scale for Uber developers across all languages and platforms. The team has a track record of innovative PL research (publications in PLDI, OOPSLA, ICSE, FSE, ASPLOS, CGO) and cutting-edge industry-standard open-source tools.
Software Engineer, Principal - AI/ML Workloads
D-Matrix
What You Will Do:

The role requires you to be part of the team that helps productize the SW stack for our AI compute engine. As part of the Software team, you will be responsible for the development, enhancement, and maintenance of the development and testing infrastructure for next-generation AI hardware. You can build and scale software deliverables in a tight development window. You will work with a team of compiler, ML, and HW architecture experts to build performant ML workloads targeted for d-Matrix’s architecture. You will also research and develop forward-looking items that further improve the performance of ML workloads on d-Matrix’s architecture.

What You Will Bring:

MS or PhD preferred in Computer Science, Electrical Engineering, Math, Physics or related degree with 12+ Years of Industry Experience.

Strong grasp of computer architecture, data structures, system software, and machine learning fundamentals

Experience with mapping NLP models (BERT and GPT) to accelerators and awareness of trade-offs across memory, bandwidth, and compute

Proficient in Python/C/C++ development in a Linux environment using standard development tools

Experience with deep learning frameworks (such as PyTorch, TensorFlow)

Self-motivated team player with a strong sense of ownership and leadership

Desired:

Research background with publication record in top-tier ML/Computer architecture conferences

Prior startup, small team or incubation experience

Experience implementing and optimizing ML workloads and low-level software algorithms for specialized hardware such as FPGAs, DSPs, DL accelerators.

Experience with ML Models from definition to deployment including training, quantization, sparsity, model preprocessing, and deployment

Work experience at a cloud provider or AI compute / sub-system company

Experience implementing SIMD algorithms on vector processors
Link: https://jobs.ashbyhq.com/d-Matrix/9ded1ccc-d1d5-4816-9589-0fb09c13167e
Software Engineer, Staff - Kernels
D-Matrix
(Shortened description)

The role requires you to be part of the team that helps productize the SW stack for our AI compute engine. As part of the Software team, you will be responsible for the development, enhancement, and maintenance of software kernels for next-generation AI hardware. You possess experience building software kernels for HW architectures, a very strong understanding of various hardware architectures, and knowledge of how to map algorithms onto them. You understand how to map computational graphs generated by AI frameworks to the underlying architecture. You have experience working across all aspects of the full-stack tool chain and understand the nuances of what it takes to optimize and trade off various aspects of hardware-software co-design.

Preferred:
Prior startup, small team or incubation experience.

Experience with ML frameworks such as TensorFlow and/or PyTorch.

Experience working with ML compilers and algorithms, such as MLIR, LLVM, TVM, Glow, etc.

Experience with a deep learning framework (such as PyTorch, TensorFlow) and ML models for CV, NLP, or Recommendation.

Work experience at a cloud provider or AI compute / sub-system company
Link: https://jobs.ashbyhq.com/d-Matrix/e8912eb5-f3be-46a5-badd-8f92f38dc30e
Computing for Brain-Computer Interfaces, Postdoc or Associate Research Scientist
Yale University
This is an exciting opportunity to advance computer architecture design and neuro-engineering simultaneously.

Neural interfaces (or brain-computer interfaces) directly read and modulate the activity of biological neurons. Enabling machines to “talk” to the human brain offers the potential to advance scientific discovery in the neurosciences and unlock new treatment options for crippling neurological disorders.

In the last two decades, “prototype” neural interfaces pioneered by neuroscientists, neuroengineers, and electrical engineers have begun scratching the surface of these potentially revolutionary advances. But these neural interfaces resemble the earliest computing machines in that they lack layers, abstractions, and interfaces. Truly delivering on the potential of neural interfaces now requires building principled and layered computer systems for neural interfaces, so that target performance per watt is achieved while also enabling discovery of new neural decoding and modulation algorithms.
AI/ML Compiler Engineer (full-time or internship)
Sapeon
About the role
---------------
The AI/ML Compiler Engineer is responsible for designing, developing, and maintaining compilers specifically for use in the field of artificial intelligence and machine learning. You will work closely with chip designers and software engineers to create efficient and optimized compilers that can run on AI-specific hardware.

What you'll do
---------------
Design and implement new compiler features for AI and machine learning applications
NPU compilation and backend optimization
Optimize computation graphs
Computation resource allocation and scheduling
DRAM and scratchpad memory management
Support deep learning training and inference
Optimize and improve existing compilers for use on AI-specific hardware
Develop and optimize quantization, calibration, quantization-aware training, etc.
Collaborate with chip designers and software engineers to understand and address their needs
Stay up-to-date with the latest advancements in compiler technology for AI and machine learning

Requirements
---------------
Bachelor’s in Computer Science or Computer Engineering required
Master’s or Ph.D. in a related field preferred
5+ years of working experience in compiler development required
3+ years of working experience with tensor compilers (e.g. TVM, MLIR) and NPU development required
Experience in neural network training, inference, and algorithms
Excellent understanding of computer architecture and optimization
Knowledge of AI-specific hardware is a plus
Strong analytical and problem-solving skills
Link: https://ats.rippling.com/sapeon-inc/jobs/0d1d8c5c-ea80-4ea7-b963-74736f7c1c03
Assistant Professor (System Software)
The University of Tokyo
I am looking for two assistant professors at the University of Tokyo in the field of system software, such as operating systems and hypervisors. One position has no term limit but requires speaking a little Japanese; the other does not necessarily require Japanese but has a term until March 2027, depending on the budget. If you are interested, please contact me during the conference.
Link: https://www.os.is.s.u-tokyo.ac.jp/en/
Principal Researcher - Efficient AI
Microsoft Research
We are looking for a Principal Researcher with an interdisciplinary background in AI, Distributed Systems, and Privacy research, including efficient training and inference of Large Language Models (LLMs), reliability of web-scale cloud workloads, and privacy mitigations in machine learning. You will be one of the key architects behind the vision and strategy of our research team of 40+ world-class researchers. You will help drive cross-organizational execution to deliver strong company and academic impact.

Feel free to reach out to Srikant Bharadwaj during the conference for more details.
Link: https://jobs.careers.microsoft.com/global/en/job/1697094/Principal-Researcher
(Meta) PyTorch Compiler Team is hiring!
Meta
For a compiler person, PT2 is probably as good as it gets for a compiler project today: cutting-edge, open-source, technically deep, fast-moving, and end-user-relevant, with adjacent possibilities to many hot areas in the ML space.

If you enjoy deep technical work and want to work with some of the most innovative minds in ML compilers, this team is a rare gem.

We are hiring frontline managers and SWEs at all levels, especially PhDs with specialties in ML compilers, GPU performance, and distributed training. If you are interested, please apply.

SWE JD: https://lnkd.in/dgf_bk8b (we share the JD with the MTIA team; please indicate your interest in PyTorch Compiler in the application)
EM JD: https://lnkd.in/dxQACcmd
Link: https://www.metacareers.com/v2/jobs/2587807648062439/
CPU Researcher
AMD Research
Full-time position with AMD's CPU Research team. Looking for a highly motivated researcher, either a PhD graduate or with equivalent experience, with a passion for innovation and learning. CPU (micro)architecture is a very active research area, influenced by emerging and current workloads, new technology trends, and a diverse SW stack that supports all market segments.
Link: https://www.linkedin.com/posts/gabriel-gabe-loh-78ab4459_mts-silicon-design-engineer-in-cambridge-activity-7173341907748577280-dXmr?utm_source=share&utm_medium=member_desktop
CPU Research Fall 2024 Co-ops
AMD Research
Co-op positions for the Fall 2024 term with AMD's CPU Research team. Looking for highly motivated and passionate PhD-level students to develop innovative ideas that will influence next-generation Zen CPU cores and their corresponding memory hierarchies. Join and collaborate with talented researchers, and learn from expert CPU architects.
Principal Researcher – Systems & AI
Microsoft
We are looking for a Principal Researcher to design and develop novel machine learning and algorithmic solutions for our cloud infrastructure with a singular purpose of making it scalable, fast, reliable, and efficient. The ideal candidate will have a strong background in machine learning or systems research and the ambition to apply it to large-scale production systems.
Link: https://jobs.careers.microsoft.com/global/en/job/1704888/Principal-Researcher-%E2%80%93-Systems-%26-AI
Principal Research Lead
Microsoft
We are looking for a Principal Researcher with an interdisciplinary background in AI, Distributed Systems, and Privacy research, including efficient training and inference of Large Language Models (LLMs), reliability of web-scale cloud workloads, and privacy mitigations in machine learning. You will be one of the key architects behind the vision and strategy of our research team of 40+ world-class researchers. You will help drive cross-organizational execution to deliver strong company and academic impact.
Link: https://jobs.careers.microsoft.com/global/en/job/1697094/Principal-Researcher
Staff Engineer, CPU Researcher
Samsung Electronics
Full-time position with Samsung America, SAIT (Samsung Advanced Institute of Technology). We're looking for a highly motivated PhD graduate or experienced researcher with a passion for innovation and learning.
We're shaping the future of CPU processor and SoC architecture for the most demanding applications of the future, like AI and HPC. We are building the foundation of processors and the related platforms that will be applied to Samsung's various future business targets. For a more practical and sustainable architecture, we believe the CPU core should be the center of technology, running all of the heterogeneous computing engines with complex software under an easy-to-use programming environment. This requires innovative ways to couple tightly with hardware engines while maximizing the efficiency and performance of the CPU.
Link: https://boards.greenhouse.io/samsungsemiconductor/jobs/5898981003
Laboratory Director - Future Computing Network Systems
Huawei - Zurich Research Center
Responsibilities (Future Computing Network Systems - Research Leader)

Conduct research on network topology, designing network topologies for AI LLM (artificial intelligence large language model) training/inference, cloud computing, high-performance computing, and so on.

Take a leading role in the team and work with internal research colleagues and academic research partners to achieve new breakthroughs in future computing network system research and be responsible for critical prototype design and development.

Produce and present research papers at internationally leading conferences, and write white papers on current developments and future directions in network topology, network theory, and networking systems design for AI computation.

Develop academic research partnerships and cooperation with leading universities and professors in the area. Where appropriate, contribute insight and research expertise to committees and other organizations that are looking to establish new industry standards and platforms.

Contribute to the research and academic community through service such as conference program committee membership, membership of journal editorial boards etc.

Requirements

PhD in an area related to networking systems, computer architecture, or theoretical computer science, and more than 15 years' experience in network topology design/research at related large companies or in academia.
Record of publishing research papers in the area of AI training systems or high-performance computing, with a particular focus on network topology design.
Link: https://careers.huaweirc.ch/jobs/3167234-laboratory-director-future-computing-network-systems
Computing and AI Systems Researchers
Huawei - Zurich Research Center
For the Computing Systems Laboratory, we are hiring both Senior Research Leaders and Rising Stars for multiple permanent positions in: 

Future Computing and AI Systems (Hardware, Software, Algorithms)

Responsibilities

Conduct fundamental research on new directions in computing systems

Develop academic research partnerships and cooperation with leading universities and professors in the area

Work with internal research colleagues and academic research partners to achieve new breakthroughs in research and innovation

Produce and present research papers at internationally leading conferences and events

Produce white papers on current developments and future directions in computing systems

Where appropriate, contribute insight and research expertise to committees and other organizations that are looking to establish new industry standards and platforms

Contribute to the research and academic community through service such as conference program committee membership, membership of journal editorial boards etc.

Requirements

PhD in an area related to computing systems, or equivalent research experience in industry

Record of publishing research papers in the area of computing systems

Strong interpersonal skills and ability to work productively in a research environment

Candidates should have research experience in computing systems, and be familiar with at least one of the following areas:

AI Accelerators: Training and Inference

Massively Parallel Heterogeneous AI SuperClusters: Hardware, Software, Algorithms

AI Software Frameworks: Compilers, IRs, and Interoperability

New Compiler Technologies and MLIR

Large Language Models: Hardware, Software, Algorithms

Parallel and Distributed Programming Models and Software Platforms

High Performance Interconnects: New Protocols, Hardware and Software

Hardware Security

Graph Computing and HPC Systems
Link: https://careers.huaweirc.ch/jobs/3221084-computing-and-ai-systems-researchers
CPU Architecture Researcher
Futurewei
Full-time position with Futurewei Technologies’ CPU Research team.
Looking for a CPU architecture researcher with a PhD in related fields of study or equivalent experiences.
Qualified candidates will have career opportunities in research projects related to, but not limited to, core architecture and microarchitecture; multicore architectures, including heterogeneous and hybrid systems; memory architecture; interconnect and SoC; and emerging technologies and applications, with the opportunity to publish results at premier computing conferences.
Applied Scientist (all levels)
Amazon
We are looking for applied scientists in ML systems in general, and in particular for systems scientists familiar with ML workloads. Successful candidates will work on projects related to LLM training and inference on AWS Trainium/Inferentia. A good research/publication background is highly preferred. We welcome applicants at all levels. Locations include the Bay Area and Seattle. An L5 job link is posted below.
Link: https://www.amazon.jobs/en/jobs/2530338/applied-scientist-ai-research-education
Post Docs - Systems & AI
Microsoft
We are looking for Post Docs to design novel solutions for our cloud workloads (both general-purpose and ML) with a singular purpose of making them scalable, fast, reliable, and efficient. The ideal candidate will have a strong background in systems research or machine learning and the ambition to apply it to large-scale production systems.
Link: https://www.microsoft.com/en-us/research/group/systems-innovation/publications/
Machine Learning Compiler and Performance Engineer
Qualcomm Canada
Today, more intelligence is moving to end devices, and mobile is becoming the pervasive AI platform. Building on the smartphone foundation and the scale of mobile, Qualcomm envisions making AI ubiquitous, expanding beyond mobile and powering other end devices, machines, vehicles, and things.

We are inventing, developing, and commercializing power-efficient on-device AI, edge cloud AI, and 5G to make this a reality.

As a member of Qualcomm’s ML Systems Team, you will participate in two activities:

1. Development and evolution of ML/AI compilers (production and exploratory versions) for efficient mappings of ML/AI algorithms on existing and future HW
2. Analysis of ML/AI algorithms and workloads to drive future features in Qualcomm’s ML HW/SW offerings
Link: https://careers.qualcomm.com/careers/job/446696160715