What shopping for a ThinkPad taught me about processor architecture

Preface
Recently, I was shopping for a new laptop, a Lenovo ThinkPad T14 or T14s, that came with a few different CPU options. There were the classic Intel and AMD models, but there was a third choice: a model with a Qualcomm CPU. Aside from the sticker price and what I gathered from a handful of Reddit posts, I didn’t know what made each CPU so different. I spent several weeks collecting notes, poring over technical documents and hunting down blog posts to fill in my knowledge gap. This post is the result of what I learned.
Introduction
Intel, AMD and Qualcomm. What makes their chips so different? To answer that question you need to understand the framework each chip adopts. This framework is known as the Instruction Set Architecture (ISA), and it determines how tasks are carried out by the processor. Just like ice cream, ISAs come in different flavors: each one takes its own approach to handling tasks. Intel and AMD both implement the same type of instruction set, while Qualcomm’s chips take a different approach. This choice brings key advantages and drawbacks, and as consumers it affects how we use our devices. Attributes like battery life, multitasking and software compatibility sit on a spectrum, and that spectrum influences our purchasing decisions. To understand how, we need to go a little deeper.
What’s an Instruction Set Architecture (ISA)?
An ISA is a framework that determines how software interacts with the hardware of a CPU. We say a CPU implements an ISA when it can understand the instructions the ISA defines. Machine code instructions, data types, processor registers and the interface for managing RAM are all laid out in this framework. It also lays out how the computer connects to the outside world. That’s known as the input/output (I/O) model and includes things like your mouse, monitor, keyboard and other computers.
An ISA defines what machine code does, regardless of how a manufacturer like AMD or Intel actually builds their processor. This allows your software (the binary) to run correctly on different implementations of the same ISA but with varying performance. It’s also what allows you to continue running the same software when upgrading from an older processor to a newer processor, preserving compatibility.
ISAs can also be extended. New instructions, support for larger addresses and additional data types can be added while the processor remains able to run code written without the extension. One of the best examples is the 64-bit extension of x86, known as amd64 or x86-64. This extension widens the 32-bit registers of x86 to 64 bits and supports more memory (RAM).
How is one different from another?
There are several ways one ISA can differ from another, but the key difference is how complex its instructions are. Some ISAs have a more complex instruction set, packing more power into each instruction, while others have a smaller, more uniform set. This distinction is significant enough that it defines two schools of processor design — Complex Instruction Set Computing (CISC) and Reduced Instruction Set Computing (RISC).
Both CISC and RISC are powerful designs capable of supporting almost any computing task. CISC offers many complex instructions, providing lots of flexibility. RISC has a smaller set of simple but highly optimized instructions that execute quickly. These differences can significantly affect the user experience by influencing how long the computer takes to execute a task, its power consumption, and how much memory it uses.
An example of CISC vs RISC
To compare the CISC approach to the RISC approach let’s look at how each handles the task of multiplying two numbers.
The CISC way
CISC tries to complete the task in as few instructions as possible. To do that, the processor’s hardware is designed to complete a multi-step operation inside a single instruction. This means providing an instruction, let’s call it MULT, that performs 3 operations:
- fetches the required operands
- performs the multiplication in hardware
- stores that result back in memory
As a result, the entire MULT operation can be expressed as a single complex instruction.
MULT val_1, val_2
With CISC’s powerful instructions, the programmer doesn’t need to call separate instructions like LOAD or STORE inside the program. The compiler doesn’t need to do as much work to translate a function from a high-level language like C into assembly. Because each instruction does more work, CISC programs often contain fewer instructions overall. The tradeoff is that more complexity is placed into the hardware of the processor to provide these powerful instructions.
The RISC way
RISC aims to complete the task using only simple instructions, meaning instructions that can be executed within one clock cycle. The same MULT operation from above would be broken into 3 kinds of simple instructions:
- LOAD: fetches each operand from memory into a register
- PROD: finds the product of the operands in the registers
- STORE: moves the data from the register back to memory
The resulting MULT operation is 4 simple instructions instead of one complex instruction.
LOAD val_1
LOAD val_2
PROD val_1, val_2
STORE val_1
Although there are more instructions to run, each simple instruction is intended to take only one clock cycle. In contrast, a complex instruction may take many clock cycles to complete. This means that in many cases both approaches will take roughly the same amount of time to complete the same operation.
Another key difference is how much each design relies on hardware to run instructions. With simple instructions, the processor doesn’t need as many transistors to complete an operation. More general-purpose registers and performance-enhancing features can be added in their place to make better use of the hardware space. Finally, simple instructions also enable pipelining, an optimization that overlaps the execution of multiple instructions.
Story Time: A brief history of today’s chip makers
We’ve covered some of the basics; now it’s time to look at how these chips evolved over time. Looking back through the history, we start to see that it’s not always about having the best approach to chip design. Many other factors, like market dominance and consumer needs, have picked the winners.
Intel and x86
x86 is a family of CISC processors originally developed by Intel, beginning with the 8086 and 8088 microprocessors. Introduced in the late 1970s, the 8086 established a flexible and extensible design that served as the foundation for future generations of processors. Its successors—including the 80186, 80286, 80386, and 80486—continued this foundation, each ending in “86,” a naming convention that ultimately gave rise to the term x86.
More importantly, the 8086 architecture went on to dominate the personal computer market after its adoption in the IBM PC, shaping software development, operating systems, and processor design for decades. Its commitment to backward compatibility ensured that each new generation preserved the ability to run earlier software, creating a powerful ecosystem that cemented x86 as the industry standard. By the 1990s, x86 had become the defining architecture of personal computing, influencing not only Intel’s designs but also competitors and the broader evolution of microprocessor engineering.
AMD shines with AMD64 (x86-64)
While the original x86 architecture was 16-bit and later expanded to 32-bit (IA-32), the turn of the millennium brought a need for more memory addressing power. Graphical operating systems like Windows, enterprise apps and server workloads started bumping up against the limitation of 32-bit memory addressing. The time had come to extend the original design. Enter AMD.
In the early 2000s, Intel attempted to move away from x86 toward a brand-new 64-bit architecture called IA-64 (Itanium). Itanium was not backwards-compatible with older 32-bit software, requiring developers to adapt to a completely new toolchain (compilers, debuggers, etc.). AMD saw an opportunity and developed AMD64 (later branded as x86-64). AMD64 extended the existing x86 instruction set to 64 bits while maintaining native, high-performance support for 32-bit code. Adopting AMD64 meant that software developers didn’t have to painstakingly adapt to a new standard. Because it was fully backwards compatible, they could move to 64-bit performance without leaving their existing codebases behind. AMD’s first AMD64 processors, the Opteron and Athlon 64, were released in 2003 and were met with a lot of success from enterprise and personal consumers. In fact, the extension was so successful and practical for developers that Intel eventually adopted the standard themselves.
Note:
amd64 or x86-64 are the technical names for the 64-bit x86 instruction set used by both Intel and AMD processors. You’ll commonly see it in the tail end of a file name like installer_x86_64.exe or when pulling an x86-based Docker image.
ARM and the alternate universe
ARM is a family of RISC architectures developed by the British company Acorn Computers in the mid-1980s. It was designed to prioritize power efficiency and simplicity over the complexity of CISC-based designs. This approach caught the attention of Apple, who partnered with Acorn Computers in the early 1990s to spin off Advanced RISC Machines Ltd., now known as Arm Holdings.
The “architect” business model
While x86 is dominated by a few giants building their own chips, ARM changed the industry by becoming a “high-tech architect.” Arm Holdings does not manufacture physical chips. Instead, they license their blueprints (the instruction set and core designs) to companies like Apple, Qualcomm, and Samsung.
These partners pay royalties to integrate ARM’s designs into their own custom System on a Chip (SoC). This cooperative model created an ecosystem where hundreds of manufacturers could innovate simultaneously on a single standard. Today, almost all smartphones, and increasingly more laptops and servers, use these ARM-based blueprints.
The gap between x86 and ARM
x86 and ARM are fundamentally different, as we have learned. They are not compatible. This means your favorite software that runs on an x86-based computer must be recompiled for ARM in order to function, and vice versa.
The rise in popularity of ARM, often branded as arm64 or aarch64, has persuaded some developers to make the leap. This means recompiling, and often refactoring, many areas of the code base to achieve compatibility. It can be a time-consuming process, and some dependencies may be missing, but it allows teams to break away from a single chip maker. That could prove to be a huge advantage as support for ARM-based computers catches up to x86.
Note:
arm64 or aarch64 are the technical names for the 64-bit ARM instruction set. It’s common to see them when downloading software like a Linux ISO, or pulling an ARM-based Docker image.
Closing thoughts
Now you have everything you need to make a slightly more informed decision about your next computer. These topics can become very dense. The microprocessor is one of the most advanced devices we’ve ever built, and diving deep into it requires a base-level understanding of how a computer works. I only skimmed the surface here, but if you enjoyed some of what you read, I encourage you to check out any of the resources I’ve tagged below.
References
Wikipedia. (n.d.). Instruction set architecture. https://en.wikipedia.org/wiki/Instruction_set_architecture
TechTarget. (n.d.). x86-64. https://www.techtarget.com/whatis/definition/x86-64
Lenovo. (n.d.). What is instruction set architecture (ISA)? https://www.lenovo.com/ca/en/glossary/instruction-set-architecture/
Roberts, E. (n.d.). RISC vs. CISC. Stanford University. https://cs.stanford.edu/people/eroberts/courses/soco/projects/risc/risccisc/
Wikipedia. (n.d.). x86. https://en.wikipedia.org/wiki/X86
Intel. (n.d.). The Intel® 8086 and the IBM PC. https://www.intel.com/content/www/us/en/history/virtual-vault/articles/the-8086-and-the-ibm-pc.html
Bryant, R. E., & O’Hallaron, D. R. (2005, September 9). x86-64 machine-level programming [Course handout]. Carnegie Mellon University. https://www.cs.cmu.edu/~fp/courses/15213-s07/misc/asm64-handout.pdf
Advanced Micro Devices. (n.d.). x86-64 technology white paper. https://classes.engineering.wustl.edu/cse362/images/1/16/X86-64_wp.pdf
ARM. (n.d.). The official history of ARM. ARM Newsroom. https://newsroom.arm.com/blog/arm-official-history
Strategyzer. (n.d.). ARM business model. https://www.strategyzer.com/library/arm-business-model
Esper. (n.d.). ARM vs. x86: What’s the difference? https://www.esper.io/blog/arm-vs-x86-whats-the-difference
Scaleway. (n.d.). Understanding the differences between ARM and x86 instances. https://www.scaleway.com/en/docs/instances/reference-content/understanding-differences-x86-arm/