The Origins of Computers and Programming

#Programming · Thu Oct 10 2024

When was the very first computer invented? And where did programming even come from? To really understand the roots of computers and programming, we’re traveling all the way back to 2500 BCE, when people first started using tools to calculate. Let’s rewind and see how we got from ancient counting tools to the powerful tech we rely on today. Ready? Let’s go!

For this article, I referred to the Computer History Museum and Crash Course: Computer Science.

The Abacus

The abacus, invented in Mesopotamia around 2500 BCE, was one of the earliest devices used for calculation. Much as a modern computer stores numbers on a hard drive, the abacus recorded the current state of a calculation so the user didn’t have to hold it in their head. It arose because, as societies grew, the numbers of people and quantities of goods became too large for individuals to track mentally.
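
To make the “stored state” idea concrete, here is a toy abacus in Python. This is purely an illustrative sketch, not a model of any historical design: the point is that the bead columns, not the user’s memory, hold the running total.

```python
# A toy abacus (illustrative sketch, not a historical design):
# each column of beads stores one decimal digit, so the device
# itself "remembers" the running total instead of the person.

columns = [0, 0, 0]  # hundreds, tens, ones

def add(amount: int) -> None:
    """Add a number by updating the bead columns, carrying as needed."""
    total = columns[0] * 100 + columns[1] * 10 + columns[2] + amount
    columns[0], rem = divmod(total, 100)
    columns[1], columns[2] = divmod(rem, 10)

add(275)
add(48)
print(columns)  # [3, 2, 3] -> the abacus now stores 323
```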

Of course, an abacus isn't a computer! So when did the term "computer" first appear?

In 1613, English writer Richard Braithwaite used the word "computer" in a book, not to describe a machine but as a job title: a "computer" was a person who carried out calculations, often with the help of mechanical aids. The job title persisted until the late 1800s, when the term gradually shifted to refer to machines.

The Step Reckoner

In 1694, German mathematician Gottfried Leibniz developed a device called the Step Reckoner. Mechanically it worked much like a car’s odometer, with digit wheels that advanced and carried automatically, and it could perform multiplication and division as well as addition and subtraction.

It was the first device capable of all four arithmetic operations, and its design influenced mechanical calculators for roughly the next three centuries. Even so, complex calculations could take hours or even days to complete, and the machine was so expensive that most people couldn’t afford it.

Pre-Computed Tables

A solution to this problem was to use pre-computed tables, which allowed people to look up values instead of spending days on calculations with the Step Reckoner.
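
The same idea survives today as the lookup table. Here is a minimal Python sketch; the sine table is just an illustrative stand-in for the hand-computed mathematical tables of the era. The slow computation happens once, and every later query becomes a cheap lookup.

```python
import math

# Do the expensive computation once, up front...
sine_table = {deg: math.sin(math.radians(deg)) for deg in range(0, 91)}

# ...then every later "calculation" is just a lookup.
print(round(sine_table[30], 4))  # 0.5
```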

Difference Engine & Analytical Engine

Charles Babbage proposed a new mechanical device, the Difference Engine, designed to evaluate polynomials using the method of finite differences (sketched in code below). Polynomials describe relationships between several variables and can also approximate logarithmic and trigonometric functions. Babbage also envisioned another machine, the Analytical Engine, which was a more general-purpose computer. It was capable of:

  • Accepting input data
  • Operating sequentially
  • Storing information (it even featured a primitive printer)

Though neither the Difference Engine nor the Analytical Engine was completed in Babbage's time, these ideas laid the foundation for computer programming. Babbage's concepts inspired the first generation of computer scientists, which is why he is often called the "father of computing."
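
To see what the Difference Engine actually mechanized, here is a minimal Python sketch of the method of finite differences (the polynomial is just an example). Once the starting value and its differences are set, every further table entry requires nothing but additions, which is exactly the kind of work gears can do.

```python
def tabulate(initial_differences, steps):
    """Tabulate a polynomial from its value and finite differences at x = 0.

    initial_differences = [f(0), Δf(0), Δ²f(0), ...]
    """
    diffs = list(initial_differences)
    table = []
    for _ in range(steps):
        table.append(diffs[0])
        # Each difference absorbs the one below it: addition only.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return table

# f(x) = x² + x + 1 has f(0) = 1, Δf(0) = 2, and a constant Δ²f = 2.
print(tabulate([1, 2, 2], 5))  # [1, 3, 7, 13, 21]
```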

The Need for Computers in Census Data

By the 1880s, the U.S. population had surged, and the Constitution required a census every ten years; counting everyone by hand was projected to take more than a decade. The Census Bureau turned to Herman Hollerith, who invented an electro-mechanical tabulating machine. Like the Step Reckoner, it stored numbers with mechanical counting gears, but it drove them with electrically powered components.

Hollerith’s machine used punched cards, with a grid of hole positions representing data. When a card was inserted into the machine, spring-loaded pins passed through the punched holes to complete electric circuits and advance counters, enabling operations such as summation. This system helped complete the U.S. census in just two and a half years, saving millions of dollars.
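
Here is a toy model of that tallying process in Python. The card layout and categories are invented for illustration and are not Hollerith’s actual encoding; the point is that each card is a grid of hole positions, and every punched hole advances a counter.

```python
# Hypothetical census cards (not Hollerith's real layout):
# column 0 = male, column 1 = female, column 2 = head of household.
cards = [
    [1, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
]

counters = [0] * 3  # one mechanical counter per column
for card in cards:
    for column, punched in enumerate(card):
        if punched:                # a pin passes through the hole...
            counters[column] += 1  # ...and advances that counter

print(counters)  # [2, 2, 2]
```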

Recognizing the value of machine tabulation, businesses began demanding these machines too. Hollerith founded the Tabulating Machine Company, which merged with other firms in 1911 and was renamed the International Business Machines Corporation, or IBM, in 1924.

The Peak of Electro-Mechanical Computing: The Harvard Mark 1

As populations and global trade grew, data needs skyrocketed, and mechanical computers swelled into enormous, costly machines prone to errors. One of the largest electro-mechanical computers was the Harvard Mark 1, completed by IBM in 1944. This massive machine contained 765,000 components, three million connections, and 500 miles of wiring. But it was slow: its relays, mechanical switches flipped by electromagnets, could only switch a few dozen times per second and wore out with use. The machines' warmth also attracted insects, and when operators later pulled a dead moth from a malfunctioning relay, Grace Hopper famously remarked that from then on, whenever anything went wrong with a computer, they said it had "bugs" in it.

Transitioning from Mechanical to Electronic Computing

To advance computing, a faster and more reliable alternative was needed. The answer was already in development: in 1904, British physicist John Ambrose Fleming created a new electronic component—the thermionic valve, or vacuum tube.

Building on Fleming's valve, American inventor Lee de Forest added a third, control electrode in 1906. Applying a charge to it permitted or blocked the flow of current, creating a switch that behaved like a relay but, with no moving parts, could switch thousands of times per second. This made vacuum tubes ideal for radios and other electronic devices.

However, vacuum tubes had their downsides: they were fragile, prone to failure, and initially costly. By the 1940s, though, their reliability had improved enough that governments could use them in computers, marking the transition to fully electronic computing.

The Colossus

Developed by a team under British engineer Tommy Flowers and completed in December 1943, the Colossus Mk 1 was the first large-scale computer to use vacuum tubes. It was employed at Bletchley Park to decrypt Nazi communications. Earlier, Alan Turing had designed an electro-mechanical machine called the Bombe to help break the Enigma cipher.

The Colossus, with 1,600 vacuum tubes, was revolutionary; in total, ten Colossi were built for codebreaking. Because it was configured by plugging cables into plugboards rather than rebuilt for each task, Colossus is regarded as the first programmable electronic computer.

ENIAC

Programming early machines like the 1946 ENIAC (Electronic Numerical Integrator and Computer) still meant physically plugging in hundreds of cables. Developed at the University of Pennsylvania by John Mauchly and J. Presper Eckert, ENIAC is considered the first truly general-purpose, programmable electronic computer.

ENIAC could perform 5,000 additions and subtractions per second, far faster than its predecessors. However, by the 1950s, even vacuum-tube computers had reached their limits, prompting the search for more efficient switches to improve speed, reliability, cost, and size.

The Transistor!

In 1947, scientists John Bardeen, Walter Brattain, and William Shockley at Bell Labs invented the transistor, launching a new era of computing.

Like relays and vacuum tubes, transistors are switches that can be turned on and off by an electric signal. But being made of solid semiconductor material, they have no moving parts and no fragile glass to break or filament to burn out: even the first transistor could switch about 10,000 times per second, and their tiny size made computers cheaper and far more compact.
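
To see why a fast, reliable switch matters so much, here is a tiny Python sketch. It is a conceptual model, not how any real machine was wired: once one signal can turn another on or off, switches compose into logic gates, and gates into everything else a computer does.

```python
def switch(control: bool, signal: bool) -> bool:
    """Pass the signal through only while the control input is on."""
    return signal and control

def and_gate(a: bool, b: bool) -> bool:
    # Two switches in series: current flows only if both are closed.
    return switch(b, switch(a, True))

def or_gate(a: bool, b: bool) -> bool:
    # Two switches in parallel: current flows if either is closed.
    return switch(a, True) or switch(b, True)

print(and_gate(True, False))  # False
print(or_gate(True, False))   # True
```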

IBM soon adopted transistors for all its computing products, ultimately making computers small and affordable enough for offices and eventually homes.

Conclusion

Relay → Vacuum Tube → Transistor

I’ve learned that these three switching technologies carried computers from massive machines to the ones we can now carry in our bags! Discovering the origins and stages of this process has been fascinating. I sometimes wonder what life would have been like in the 1940s: perhaps I’d be plugging in cables and catching literal "bugs" in machines. As someone born in the ‘90s, I feel lucky to experience the convenience of modern technology and the legacy of all these pioneers.