About us
Securely delivering planet-scale systems and infrastructure that power Google and Google Cloud Platform.
Our focus areas
- SystemsResearch@Google (SRG)
- Hardware/computer architecture
- Programming languages/compilers
- Operating systems
- Compute
- Storage systems
- Computer networks
SystemsResearch@Google (SRG)
Started in 2021, SystemsResearch@Google (SRG) is a new research team positioned at the heart of Google’s technical systems and infrastructure engineering organization. SRG’s mission is to shape the future of hyperscaler systems design for Google by inventing, incubating, and infusing new concepts, designs, and technologies into Google applications, systems, and data centers. The team’s position allows integrated engagement with engineering and product teams, enabling joint exploration in concert with transformative workloads. Beyond Google, SRG looks to forge strong relationships with external research communities working on pressing systems-research problems.
SRG focuses on research in support of fundamental advances in security, reliability, programming models, data analysis, systems for machine learning, networking, storage systems, hardware architecture, and software systems. The team is co-led by David Culler and Hank Levy, and brings together leading systems thinkers from around the world and from inside Google. SRG is located at Google’s Bay Area and Seattle facilities.
Hardware/computer architecture
Google’s data center and hardware infrastructure underlies the global-scale computing that powers Google services (and, through Google Cloud, many other organizations’ applications and services around the world). Advances in hardware and computer architecture have enabled many of our innovations, such as the use of TPUs for new machine learning applications.
Designing and deploying our infrastructure, especially at Google scale, poses significant challenges: How do we deliver high performance at low cost, in a timely manner, while maintaining high reliability and security? Additionally, the plateaus in efficiency from the slowing of Moore’s Law, while demand continues to grow at unprecedented scales, pose a once-in-a-generation opportunity for new optimizations and innovations across the stack: data centers, distributed systems, platform hardware and chip design, and higher levels of software-defined infrastructure. From new custom hardware accelerators for machine learning (TPU), video (VCU), security (OpenTitan), and other emerging domains, to rethinking hardware design for modularity (new “multi-brained” servers) and optionality (open-source hardware), we are reinventing hardware. At the same time, we are looking holistically at our large-scale distributed systems across both hardware and software, investing in new compute-as-a-service offerings (including software-defined memory, software-defined power, and more) and in end-to-end low-latency stacks. We are active in the academic community and in open-source and standards bodies.
Programming languages/compilers
From assembly to JITs, shell scripts, C++, and even configuration, programming languages are at the interface between humans and the data center. Google has created new languages like Go (systems programming), XLS (accelerated hardware synthesis), and P4 (packet processing) to adapt to new paradigms, and has influenced the direction of C++ and other languages to help address the needs of data center development. Google’s code spans C++, Java, Kotlin, Go, Python, TensorFlow, and more. We are continually developing automation to refactor and evolve programs so the codebase gets cleaner, healthier, and faster over time. Google leverages C++ for data center compute, so we invest heavily in new technologies for compiler optimization and in runtime libraries such as garbage collection and memory allocation. The foundation of our performance efforts is a data-driven approach to incremental improvement, driven by Google-Wide Profiling and similar internal tools that let us understand the software and hardware attributes of the fleet at scale.
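Google-Wide Profiling itself is internal, but the data-driven idea can be sketched with Go’s standard runtime/pprof package. In the hypothetical toy below (hotLoop is a stand-in for a performance-critical path), the program samples its own CPU usage while the workload runs; fleet-scale profilers continuously aggregate similar samples across many machines.

```go
// profile.go — a minimal sketch of sample-based CPU profiling with Go's
// standard runtime/pprof package (illustrative; not Google-Wide Profiling).
package main

import (
	"fmt"
	"log"
	"os"
	"runtime/pprof"
)

// hotLoop stands in for a performance-critical code path we want to study.
func hotLoop(n int) int {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i * i
	}
	return sum
}

func main() {
	f, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Sample call stacks (about 100 times per second) while the workload runs.
	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	total := 0
	for run := 0; run < 500; run++ {
		total += hotLoop(1_000_000)
	}
	fmt.Println("checksum:", total)
	// Afterwards, inspect hotspots with: go tool pprof cpu.prof
}
```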
Operating systems
Google has continuously invested in the development of highly performant, fault-tolerant, and efficient cloud computing environments at scale. Google’s Technical Infrastructure engineers have pioneered approaches such as containers (resource isolation), Borg (cluster management), Kubernetes, the Andromeda virtual network stack, the Anthos multi-cloud environment, global-scale consensus, user-level messaging, and more. Our research and development help define the systems-level abstractions that allow new technologies and specialized resources -- such as ML engines and TPUs -- to be introduced into a reliable global cloud infrastructure. Looking forward, the team sees virtualization, distributed systems composition, and the incorporation of machine learning as just a few of the places where we hope to bring new ideas to the systems research and open source communities.
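As a rough illustration of “containers (resource isolation)”, the toy below is a Linux-only Go sketch (not Borg or Kubernetes code) that launches a shell inside fresh UTS, PID, and mount namespaces, the kernel primitives that container runtimes build on. It needs root privileges, and cgroups (not shown) would supply the actual resource limits.

```go
// isolate.go — a toy, Linux-only sketch of the kernel namespaces that
// containers are built on (not Borg or Kubernetes code; needs root).
package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Run a shell in fresh UTS, PID, and mount namespaces, so it sees
	// its own hostname and its own process tree, isolated from the host.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```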
Compute
The term “Compute” at Google refers to the system software layers that make Google Search, YouTube, Google Meet, Google Cloud Platform, and many other Google products and services work, and work efficiently, at a global scale. This software manages jobs, individual machines, clusters of machines, data centers with many clusters, and a portion of Google’s global network of data centers. Our challenges include keeping Google Cloud customers and our users up and running by localizing failures; earning trust by helping to keep data private and secure; and being a good global citizen by minimizing resource usage. A key part of meeting these challenges is instrumenting Google’s systems to create a data-driven culture, where innovation at and across the layers makes solving hard problems possible.
Storage systems
Managing state when writing Internet-scale applications is hard. People generally expect storage systems to be available (reads and writes in real time), secure (only authorized users have access), consistent (gives the right data back), durable (lives on even when machines or whole data centers fail), efficient (as cheap as possible), and somehow still easy to use.
There is perhaps no larger focused-purpose distributed system than a hyperscaler storage system. At Google, we multiplex many applications onto a shared pool of hard disks and SSDs, giving them access to cheap bulk storage or fast, high-throughput storage through the same system at the turn of a dial. The scale, heterogeneity, and complexity of our storage systems present challenges as well as opportunities for innovation, ranging from tuning an RPC’s end-to-end data checksumming to automatically learning cost-efficient data placement and capacity planning in a data center. We are focused on expanding Google’s technical stewardship in providing safe, secure, and efficient data storage, while also exploring disruptive technologies. Hardware acceleration, new storage devices, improved insight, and machine learning integrations are just a few examples of how Google’s Technical Infrastructure engineers are branching out.
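To make “end-to-end data checksumming” concrete, here is a small, hypothetical Go sketch that computes a CRC32C checksum (the Castagnoli polynomial commonly used in storage systems) when data is written and verifies it when the data is read back, so corruption introduced anywhere along the path is detected.

```go
// checksum.go — a minimal sketch of end-to-end data integrity checking.
// Illustrative only; real storage stacks checksum at many layers.
package main

import (
	"fmt"
	"hash/crc32"
)

// castagnoli is the CRC32C polynomial widely used for storage checksums.
var castagnoli = crc32.MakeTable(crc32.Castagnoli)

// record pairs a payload with the checksum computed when it was written.
type record struct {
	data []byte
	sum  uint32
}

// write computes the checksum at the sender, before the data leaves.
func write(data []byte) record {
	return record{data: data, sum: crc32.Checksum(data, castagnoli)}
}

// verify recomputes the checksum at the receiver and compares.
func verify(r record) error {
	if got := crc32.Checksum(r.data, castagnoli); got != r.sum {
		return fmt.Errorf("corruption detected: checksum %#x, want %#x", got, r.sum)
	}
	return nil
}

func main() {
	r := write([]byte("hello, storage"))
	if err := verify(r); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("data intact")

	r.data[0] ^= 0xFF // simulate a bit flip in transit
	fmt.Println(verify(r))
}
```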
Computer networks
Networking is central to modern computing, from WANs connecting cell phones to large data stores, to the data-center interconnects that deliver storage and fine-grained distributed computing. Because our distributed computing infrastructure is a key strategy for the company, Google has long focused on building network infrastructure that supports its scale, availability, and performance needs, and on applying our expertise and infrastructure to solve similar problems for Cloud customers.
Google’s networking teams combine building and deploying novel networking systems at unprecedented scale with recent work on fundamental questions around data center architecture, cloud virtual networking, and wide-area network interconnects. The networking team helped pioneer the use of software-defined networking, the application of ML to networking, and the development of large-scale management infrastructure, including telemetry systems. The team also addresses congestion control and bandwidth management, capacity planning, and designing networks to meet traffic demands, and builds cross-layer systems to ensure high network availability and reliability. By publishing our findings at premier research venues, we continue to engage academic and industrial partners to further the state of the art in networked systems.
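As a rough sketch of what congestion control involves, the toy Go program below implements the classic additive-increase/multiplicative-decrease (AIMD) rule. Production algorithms are far more sophisticated, but the feedback loop is the same: grow the sending window while transfers succeed, shrink it when congestion is signaled. All names here are hypothetical.

```go
// aimd.go — a toy sketch of additive-increase/multiplicative-decrease,
// the classic congestion-control rule (illustrative only).
package main

import "fmt"

type aimdWindow struct {
	cwnd float64 // congestion window, in packets
}

// onAck grows the window by roughly one packet per round trip.
func (w *aimdWindow) onAck() { w.cwnd += 1.0 / w.cwnd }

// onLoss halves the window when congestion is signaled.
func (w *aimdWindow) onLoss() {
	w.cwnd /= 2
	if w.cwnd < 1 {
		w.cwnd = 1
	}
}

func main() {
	w := &aimdWindow{cwnd: 1}
	for i := 0; i < 100; i++ {
		if i%25 == 24 { // pretend every 25th packet observes loss
			w.onLoss()
		} else {
			w.onAck()
		}
	}
	fmt.Printf("final congestion window: %.1f packets\n", w.cwnd)
}
```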
Featured Staff
- Amin Vahdat, VP of Systems and Infrastructure
- Carrie Grimes Bostock, VP, Engineering Fellow
- David Culler, Distinguished Software Engineer
- Hong Liu, Google Fellow
- Hank Levy, Distinguished Engineer
- Arjun Singh, Area Tech Lead for Networking; Distinguished Engineer
- Amber Huffman, Principal Engineer
- Dr. Parthasarathy Ranganathan, VP, Engineering Fellow
- Fatma Özcan, Principal Software Engineer
- John Wilkes, Principal Software Engineer
- Kathryn S. McKinley, Principal Scientist
- Mike Dahlin, Distinguished Engineer
- Nandita Dukkipati, Principal Software Engineer
- Kim Keeton, Principal Software Engineer
- Norm Jouppi, Distinguished Hardware Engineer
- Steve Gribble, Distinguished Software Engineer
- Maire Mahony, Principal Hardware Engineer
- Uri Frank, Vice President, Engineering
Amin Vahdat
VP of Systems and Infrastructure
Amin Vahdat is a Fellow and Vice President of Engineering at Google, where his team is responsible for Engineering, Product Management, and UX for Compute (Google Compute Engine, Borg/Cluster Scheduling, and Operating Systems), Platforms (TPUs, Accelerators, Servers, Storage, and Networking), Network Infrastructure (Datacenter, Campus, RPC, and End Host network software), Cloud Networking (Google Compute Engine, NetLB, and Google Private Cloud), Storage (Filestore, Google Cloud Storage, Backup and Disaster Recovery, and Transfer), and the Systems Research Group. Until 2019, he was the Area Technical Lead for Networking at Google, responsible for the Google Technical Infrastructure roadmap in collaboration with peers in Compute, Storage, and Hardware. Vahdat is active in computer science research, with more than 41,000 citations to over 200 refereed publications across cloud infrastructure, software-defined networking, data consistency, operating systems, storage systems, data center architecture, and optical networking.
In the past, he was the SAIC Professor of Computer Science and Engineering at UC San Diego and the Director of UCSD’s Center for Networked Systems. Vahdat received his PhD in Computer Science from UC Berkeley, is an ACM Fellow, and is a past recipient of the NSF CAREER award (2000), the UC Berkeley Distinguished EECS Alumni Award (2019), the Alfred P. Sloan Fellowship (2003), the SIGCOMM Networking Systems Award (2018), and the Duke University David and Janet Vaughn Teaching Award (2003). Most recently, Amin was awarded the SIGCOMM Lifetime Achievement Award (2020) for his groundbreaking contributions to data center and wide area networks.
Q: What makes you excited about your work today?
The computing industry is at an inflection point where we have an opportunity to revisit the fundamentals of what has led to current conventional wisdom and best practices for building computing infrastructure. I'm most excited about the innovation we're driving through our software and hardware developments – we have a unique opportunity to define the architectural principles for infrastructure over the coming decades. So whether it's through machine learning capabilities, top-tier security, reliability, and availability, or the rise of Edge computing and sovereignty, we're providing our customers with the right foundation to leverage the transformative power of compute infrastructure. These principles for the infrastructure of the future will enable services that can improve our productivity, health, and happiness in ways that we cannot imagine today.
Carrie Grimes Bostock
VP, Engineering Fellow
Carrie Grimes Bostock graduated from Harvard with an A.B. in Anthropology/Archaeology in 1998, and an interest in quantitative methods for dealing with disparate data. She graduated from Stanford in 2003 with a PhD in Statistics after working with David Donoho on Nonlinear Dimensionality Reduction problems, and has been at Google since mid-2003.
Dr. Grimes Bostock spent many years leading a research and technical team in Search at Google, working out what criteria make a search engine index "good," "fast," and "comprehensive," and how to achieve those goals, before moving to Technical Infrastructure at Google as a technical lead for fleet performance, deployment, and optimization. Currently, she is working on enabling more innovative machine learning at scale for Google products and beyond.
David Culler
Distinguished Software Engineer
David Culler joined Google after 31 years at the University of California, Berkeley, pioneering extreme networked systems, from laying the foundations of clusters, Internet services, and planetary-scale systems to making low-power embedded wireless sensor networks a reality. His work, represented in over 300 publications, 10 test-of-time awards, numerous “best papers”, 34 patents, and the seminal textbook on parallel computer architecture, is reflected in his role in the National Academy of Engineering, where he serves on the Computer Science and Telecommunications Board and on several national studies. His academic career is punctuated with industrial phases, including Sun Microsystems, serving as founding director of Intel Research Berkeley, and co-founding Arch Rock (now part of Cisco), as well as administrative roles, including Chair of EECS and founding Dean of the Berkeley Division of Data Sciences. His recent work brings networked systems to the building environment to help address sustainability and resilience. David is an ACM Fellow, an IEEE Fellow, and a recipient of the SIGMOBILE Outstanding Contribution Award and the Okawa Prize.
Q: What are some of the more challenging problems that you’ve had to solve?
I was hired to answer the question, “How do you accelerate the pace of innovation in systems?”, and we determined that we need to build a research group — one that is very different from traditional IT research. In order to continue innovating, we need to build an innovation engine that fits into this culture of delivering solutions 24/7, around the world. At this time in the industry, everything is changing, so we need to create a new paradigm for industrial research. It’s not good enough to make things work for current uses; it will need to continue working for uses that have never been seen or controlled.
Hong Liu
Google Fellow
Hong Liu is a Google Fellow in Systems and Services Infrastructure, where she works on the roadmap, architecture, and photonic innovation for Google’s data center networks and machine learning systems. Her research interests include high-speed signaling, optical architecture, and interconnects.
Before joining Google, Hong was a Member of Technical Staff at Juniper Networks, where she worked on the architecture and design of network core routers and switches. Hong received her Ph.D. in electrical engineering from Stanford University, and is an Optica Fellow.
Hank Levy
Distinguished Engineer
Hank joined Google in 2020, where he is a Distinguished Engineer and co-leader (with David Culler) of SystemsResearch@Google. He is also Professor and Wissner-Slivka Chair in the Paul G. Allen School of Computer Science & Engineering at the University of Washington (on leave). Hank led UW Computer Science & Engineering (CSE) for 14 years through a period of major growth, first as Department Chair and then as Founding Director of the Allen School.
Hank's research concerns operating systems, distributed systems, security, computer architecture, and hardware multithreading (SMT). He is the author of two books and more than 100 papers on computer systems design, and his publications have earned nearly 20 best-paper and "test of time/influential paper" awards in systems and architecture. He is very proud of the 27 Ph.D. students he was fortunate to be able to learn from and advise.
Hank is a Member of the National Academy of Engineering, a Fellow of the ACM, a Fellow of the IEEE, and a recipient of a Fulbright Research Scholar Award. He is a former chair of ACM SIGOPS and a past program chair of the SOSP, OSDI, ASPLOS, and HOTOS conferences. He launched his early career at Digital Equipment Corporation (DEC), an early computer manufacturer, where he worked on commercial operating systems, system architecture, and early clustered computing. He was also a co-founder of several startups with UW colleagues.
Q: Why would a recent grad want to work at Google?
There are multiple reasons. The talented people and the open environment make it possible to learn about whatever may interest you. The scale of Google’s infrastructure can be mind-blowing. And the breadth of Google’s products – including its web services, its collection of mobile devices, and Google Cloud – provides a wide range of often challenging and fascinating workloads and technologies.
Arjun Singh
Area Tech Lead for Networking; Distinguished Engineer
Arjun is a Distinguished Engineer and Technical Lead for networking at Google. During his tenure at Google, he has worked on solutions for Google’s data center, wide-area, and edge/peering networks, with a focus on software-defined networking. Arjun has participated in five generations of data center and wide-area networking infrastructure at Google over 16 years, and has been recognized with the ACM SIGCOMM Networking Systems Award (2021) for his work. Before joining Google, Arjun received his PhD and MS in Electrical Engineering from Stanford University and a Bachelor of Technology in Computer Science and Engineering from the Indian Institute of Technology (IIT), Kharagpur.
Q: What makes Google unique from other systems companies?
Highly performant, reliable, and efficient systems are only possible with a culture of deep collaboration between compute, storage, and networking, along with vertical integration with application teams, something we strive hard to enable within Google. The tight feedback loop we have with applications, service owners, and customers provides early input on the impact of a new technology on the end user.
Amber Huffman
Principal Engineer
Amber Huffman is a Principal Engineer in Google Cloud responsible for driving Google’s industry engagement in the data center hardware ecosystem, enabling easy integration of a broad array of technologies into Google’s data centers, including servers, storage, networking, accelerators, power, cooling, and security. Amber serves as the President of NVM Express, as a board member of Universal Chiplet Interconnect Express (UCIe), and as co-chair of the Open Compute Project (OCP) Storage Project.
A respected authority on storage, memory, and I/O architecture, Huffman has a track record of standards and ecosystem development, including NVM Express (NVMe), Open NAND Flash Interface (ONFI), Serial ATA, and Advanced Host Controller Interface (AHCI), where she served as lead author and editor. Before joining Google in 2021, Huffman was a Fellow and VP at Intel Corporation, where her last role was Chief Technologist in the IP Engineering Group.
Dr. Parthasarathy Ranganathan
VP, Engineering Fellow
Partha Ranganathan is currently a technical Fellow at Google, where he is the area technical lead for hardware and data centers, designing systems at scale. Prior to this role, he was an HP Fellow and Chief Technologist at Hewlett Packard Labs, where he led research on systems and data centers. Partha has worked on several interdisciplinary systems projects with broad impact on both academia and industry, including widely used innovations in energy-aware user interfaces, heterogeneous multi-cores, power-efficient servers, accelerators, and disaggregated and data-centric data centers. He has published extensively (including co-authoring the popular "Datacenter as a Computer" textbook) and is a co-inventor on more than 100 patents. His work has often been featured in the popular press, including the New York Times, Wall Street Journal, and San Francisco Chronicle. Partha is active in teaching (e.g., at Stanford) and mentoring (e.g., Google TechAdvisors), and in the broader community (e.g., serving on the executive team for ACM SIGARCH and on his local school district foundation). He has been named a top-15 enterprise technology rock star by Business Insider and one of the top 35 young innovators in the world by MIT Tech Review, and is a recipient of the ACM SIGARCH Maurice Wilkes Award, Rice University's Outstanding Young Engineering Alumni Award, and the IIT Madras Distinguished Alumni Award. He is also a Fellow of the IEEE and ACM, and currently serves on the board of directors of OpenCompute.
Q: What are some of the more challenging problems that you’ve had to solve?
The most exciting system problems these days have to do with scale. We get scale because of the immense workload that we support: Websearch, YouTube, Knowledge Graph, Google Assistant, Android, Chrome, Maps, Cloud. Each one of these businesses alone runs at a really large scale. We get to look at the underlying infrastructure.
Fatma Özcan
Principal Software Engineer
Fatma Özcan is a Principal Engineer in SystemsResearch@Google. Before coming to Google, she was a Distinguished Research Staff Member and a senior manager at IBM Almaden Research Center. Her current research focuses on platforms and infrastructure for large-scale data analysis, query processing and optimization of semi-structured data, and democratizing analytics via natural-language querying (NLQ) and conversational interfaces to data.
Dr. Özcan received her PhD in computer science from the University of Maryland, College Park, and her BSc in computer engineering from METU, Ankara. She has over 21 years of experience in industrial research and has delivered core technologies into IBM products. She has been a contributor to various SQL standards, including SQL/XML, SQL/JSON, and SQL/PTF.
Dr. Özcan is a co-author of the book "Heterogeneous Agent Systems" and of several conference papers and patents. She received the VLDB Women in Database Research Award in 2022. She is an ACM Distinguished Member and the vice chair of ACM SIGMOD. She has served on the board of directors of the CRA (Computing Research Association) since 2020, and is a steering committee member of CRA-Industry.
John Wilkes
Principal Software Engineer
John Wilkes has been at Google since 2008, where he works on automation for building warehouse-scale computers, with a current focus on delivering network capacity. Before this, he worked on cluster management for Google’s compute infrastructure (Borg, Omega, Kubernetes). He is interested in many aspects of distributed systems, but a recurring theme has been technologies that allow systems to manage themselves.
He received a PhD in computer science from the University of Cambridge, joined HP Labs in 1982, and was elected an HP Fellow and an ACM Fellow in 2002 for his work on storage system design. Along the way, he’s been program committee chair for SOSP, FAST, EuroSys, and HotCloud, and has served on the steering committees for EuroSys, FAST, SoCC, and HotCloud. He’s listed as an inventor on 50+ US patents, and has an adjunct faculty appointment at Carnegie Mellon University. In his spare time he continues, stubbornly, trying to learn how to blow glass.
Q: What makes you excited about your work today?
I like that my work addresses an important problem. Networking is critical to running the business and serving the users of Google. The work is deeply technically challenging and the ecosystem is organizationally complicated and incredibly rich – which means that I’m continually being stretched (in a good way!) and yet still have opportunities to grow and learn.
Kathryn S. McKinley
Principal Scientist
Kathryn S. McKinley is a Principal Research Scientist at Google, where she designs engineering systems customized to GCE customer workloads for performance and a transparent capacity experience. She leads teams that focus on infrastructure for industry-leading price-performance products that use Google’s and the world’s resources wisely. Her expertise spans cloud and parallel systems, with a focus on memory technologies. Prior to joining Google, she was a Principal Researcher at Microsoft and an endowed professor at the University of Texas at Austin, where her research groups produced technologies that influenced industry and academia: the industry-leading DaCapo Java benchmarks and benchmarking methodologies; Hoard, the first scalable and provably memory-efficient memory manager, adopted by IBM and Apple’s OS X; and Immix, the first of a novel mark-region family of high-performance garbage collectors, in use by Jikes RVM, Haxe, Rubinius, Scala, and others. Her research excellence has been recognized by numerous test-of-time and best-paper awards. She is an IEEE Fellow and an ACM Fellow. Kathryn is passionate about inclusion and equity in computing. In 2018, she co-founded the ACM CARES committees, a new type of resource to combat sexual harassment and discrimination in the computing research community. As a Computing Research Association (CRA) board member and CRA-WP board member and co-chair, she participates in and leads programs to increase the participation of women and under-represented groups in computing.
Q: What makes you excited about working here?
The scale and impact of Google services give me a unique opportunity to use the world's resources wisely and improve the productivity of an enormous number of people.
Mike Dahlin
Distinguished Engineer
Dr. Mike Dahlin is a Distinguished Engineer at Google, where he works as the Area Technical Lead for Google Compute Engine and for Borg, Google’s internal compute node infrastructure. In his 9 years working on the Google Cloud Platform, he has led projects ranging from cross-product efforts to improve its reliability, efficiency, and geographic expansion, to focused efforts like Google Compute Engine E2 VMs, which help provide customers with strong price-performance for general-purpose workloads.
Before joining Google, Mike was a Professor of Computer Science at the University of Texas at Austin, where his research focused on distributed systems, data replication, and fault tolerance. He also co-authored the textbook, Operating Systems: Principles and Practice.
Mike has been recognized for advances in distributed systems and operating systems research and development as an ACM Fellow and an IEEE Fellow. He is a past recipient of an NSF CAREER award, Alfred P. Sloan Fellowship, and University of Texas at Austin Faculty Fellowship in Computer Science.
Q: What makes you excited about working here?
What attracted me to Google nine years ago was that they were defining this Cloud architecture, the set of cloud services that will define how people will do computing for the next several decades. And what’s been further exciting is that this is not a static answer. The abstractions we were defining nine years ago formed the baseline layer of a cloud infrastructure. Now we can build on that baseline layer and provide higher-level, more powerful abstractions.
Nandita Dukkipati
Principal Software Engineer
Nandita is a Principal Software Engineer in Host Networking, where she focuses on low-latency networking, congestion control, and telemetry systems. Her team is responsible for delivering end-to-end network performance for applications by making efficient use of network bandwidth, scheduling smartly, providing end-to-end visibility into application behavior, and making congestion control work at scale. She has published award-winning papers and made numerous contributions to networking and systems. Nandita received her PhD in Electrical Engineering from Stanford University in 2008.
Q: Why would a recent grad want to work at Google?
No matter what part of the organization you’re in, you can meet people who are interested in solving the problem by moving the needle in a fundamental way (as opposed to putting in hacks). The enthusiasm of your colleagues and the positive environment of “anything is possible” has a multiplicative effect on us.
Kim Keeton
Principal Software Engineer
Kimberly Keeton is a Principal Software Engineer in the SystemsResearch@Google group, working to invent and incubate new technologies to support Google's hardware and software infrastructure. Her recent research focuses on memory efficiency and novel memory technologies, including persistent memory and memory disaggregation.
Before joining Google, Kim was a Distinguished Technologist at Hewlett Packard Labs where she investigated how to improve the manageability, dependability, and usability of large-scale storage and information systems, and how these systems can exploit emerging technologies, such as persistent memory, to improve functionality and performance. Kim’s work was among the first to automate the design of these large-scale storage systems to meet performance and dependability goals (for example, minimizing recovery time and data loss). Her work has led to numerous publications and granted patents that have received multiple test-of-time and best paper awards and which have contributed to multiple products.
Kim received her PhD and MS in Computer Science from the University of California at Berkeley and her BS in Computer Engineering and Engineering and Public Policy from Carnegie Mellon. She is a Fellow of the ACM and the IEEE, a UC Berkeley EECS Distinguished Alumna, and a former program chair for OSDI, EuroSys, SIGMETRICS, FAST, and the DSN Performance and Dependability Symposium. She has served as an industrial advisor to university research groups at Carnegie Mellon, ETH Zurich, and UC Berkeley. In her spare time, she sings with the Grammy-nominated chorus Pacific Edge Voices.
Norm Jouppi
Distinguished Hardware Engineer
Norman P. Jouppi is a Google Fellow. He is the tech lead for Google Tensor Processing Units (TPUs). Norm is known for his innovations in computer memory systems, was the principal architect and lead designer of several microprocessors, contributed to the architecture and design of graphics accelerators, and extensively researched telepresence. His innovations in microprocessor design have been adopted in many high-performance microprocessors.
Norm received his Ph.D. in electrical engineering from Stanford University in 1984, and a master of science in electrical engineering from Northwestern University in 1980. While at Stanford, he was one of the principal architects and designers of the MIPS microprocessor, and developed techniques for MOS VLSI timing verification. He joined HP in 2002 through its merger with Compaq, where he was a Staff Fellow at Compaq’s Western Research Laboratory (formerly DECWRL) in Palo Alto, California. In 2010 he was named an HP Senior Fellow. From 1984 through 1996, he was a consulting assistant/associate professor in the electrical engineering department at Stanford University where he taught courses in computer architecture, VLSI, and circuit design.
Norm holds more than 125 U.S. patents. He has published over 125 technical papers, with several “best paper” awards and two International Symposium on Computer Architecture (ISCA) Influential Paper Awards. He is the recipient of the 2014 IEEE Harry H. Goode Award and the 2015 ACM/IEEE Eckert-Mauchly Award. He is a Fellow of the ACM, IEEE, and AAAS, and a member of the National Academy of Engineering.
Steve Gribble
Distinguished Software Engineer
Steve Gribble is a Distinguished Software Engineer and TLM at Google, where he builds host-side networking software and SDN systems that make Google-scale networks high-performance, available, debuggable, and easy to operate. Previously, Steve was a computer scientist and full professor in the University of Washington’s Department of Computer Science & Engineering, which he joined in November 2000 after receiving his Ph.D. from UC Berkeley under Professor Eric Brewer.
In 2006, Steve co-founded SkyTap, which provides cloud-based software development, test, and deployment platforms. Earlier, in 1996, Steve co-founded ProxiNet, Inc., a company that built graphical web browsers for wireless Palm Pilot PDAs, using scalable cloud infrastructure to optimize and render web content. ProxiNet was acquired by Pumatech in 1999.
Q: What is your definition of the word “System”?
I think of systems as the technical discipline of integrating hardware, software, and distributed-systems building blocks to solve a complex problem. Some examples of exciting systems projects I've worked on at Google include building scalable and reliable data center networks; integrating hardware accelerators into our cloud network and storage systems; and building record, replay, and simulation systems that permit Google to capture and troubleshoot operational issues in production networks. A systems engineer can assemble, evolve, and optimize building blocks into something that is powerful and capable.
Maire Mahony
Principal Hardware Engineer
Maire is a Principal Engineer and Technical Lead for Storage at Google. During her tenure at Google, Maire has participated in the development of multiple generations of server and storage solutions for Google’s data centers, and served as a board member of the OpenPOWER Foundation. Before joining Google, Maire worked at Sun Microsystems on SPARC and AMD blade server systems.
Uri Frank
Vice President, Engineering
Uri joined Google Cloud in March of 2021 and is now the vice president of engineering and general manager of the chip implementation and innovation (CI2) team. His team is responsible for defining and developing a wide variety of chips for Google's infrastructure, such as specialized machine learning accelerators, video transcoding accelerators, and networking chips.
Before Google Cloud, Uri distinguished himself as a corporate vice president at Intel, where he led development of generations of high-performance compute solutions, delivering Core IP to Intel’s client, data center, and IoT businesses. The SoCs he developed for Intel’s personal computing businesses delivered higher performance, new features, and significant improvements in power consumption.
Uri earned his BS and MS degrees in electrical engineering from the Technion – Israel Institute of Technology and holds a number of patents in computer architecture.
Q: Are you surprised to find yourself designing chips at Google?
As the silicon space undergoes transformation from the dramatic slowing of Moore's Law and the democratization of silicon development (open source hardware), Google is in a unique place to become a leader in silicon. Google's scale and expertise in the open source space, along with its full stack ownership, allows the company to innovate and contribute in ways that many others simply cannot. I am excited to be expanding Google's presence in the dynamic field of chip design as my team and I advance custom silicon and chip implementation methodologies for Google and Alphabet.
Want to join the team?
Google continues to build an inclusive workplace where every team member can collaborate and thrive. We believe that diversity of perspectives and ideas leads to better discussions, decisions, and outcomes for everyone. Located in Google hubs around the world, our teams welcome the exciting approaches to problem solving that well-rounded teams create. Our teams are hiring outstanding engineers and professionals who are excited by opportunities to apply their skills to solve technological challenges at Google scale.