Friday 28 March 2014

Sketchy Fact #33: Left-Handed Lunar Exploring

Approximately 62% of autistic children are left-handed, compared with 37% of non-autistic children. Buzz Aldrin is also left-handed. This in no way implies he is autistic, but it probably would have made driving a lunar buggy a bit annoying.



Wednesday 26 March 2014

Who Built the First Computer? A Hypothetical Argument

Last week we published an article about Moore’s Law and the rapid development of computing technology. In that article we said that a singing birthday card has more computing power than the Allied Forces of WWII. Since there is the potential to debate that comparison, we thought we would use this week's article to clear up what we meant. It all boils down to one question: When was the first computer invented? Whereas one person could argue that it was unfair to compare the card to the military because the latter had no transistors and therefore no computing power at all, we at Sketchy Science contended that computing technology goes back a lot further than that.

To uncover the ancestor of the modern computer we have to go back in time. Conveniently enough, our first stop is around the time of the events that led to last week’s contentious comparison. As we saw in our discussion of Moore’s Law, modern computers are tied to transistors. The first transistor was built in 1947, only 2 years after the end of WWII. 


Transistors were a huge leap forward in computing technology. They allowed for the unprecedented control of electrons and formed the foundation for all 21st century electronics (so far). If your definition of a computer includes the words “electronic” and “transistor,” our argument is at an end. But to stop here would be to ignore an influential 19th century philosopher, mathematician, and engineer with the delightfully British name of Charles Babbage.


The 1800s were a tough time to be a mathematician. Science had progressed far enough to begin dealing with some seriously complex equations, but not so far as to have the technology to compute the answers automatically. The result was long hours spent hunched over a desk with quill in hand, writing out the calculations that could ultimately prove your ideas. A man after my own heart, Charles Babbage had a touch of laziness in him and thought there must be a better way. In one of the great feats of laziness-inspired motivation (he wasn’t really lazy, he just valued efficiency and accuracy), Babbage drafted plans for a mechanical device that could compute complex polynomials without the need for a person to work out every step.


In 1822, Babbage presented a paper to the Royal Astronomical Society called “Note on the application of machinery to the computation of astronomical and mathematical tables.” In it he laid out how his machine would work. The Difference Engine, as it was called, would have several columns numbered 1 to N (N being a stand-in for however many columns are needed). Each column would contain numbered wheels. In theory, once the alignment and timing were worked out, you could set the columns to the value of an equation at a known starting point, along with its successive differences, and then crank the machine through a series of simple additions to produce the value of that same equation at each subsequent step.
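To make that a bit more concrete, here is a minimal sketch (in Python, purely for illustration) of the “method of differences” that the Difference Engine mechanised. The example polynomial, starting point, and step size are our own choices, not Babbage’s.

# A toy version of the "method of differences" the Difference Engine mechanised.
# The polynomial, starting point, and step size below are illustrative only.

def difference_table(poly, start, step, order):
    """Seed the columns: the polynomial's value plus its successive differences."""
    points = [poly(start + i * step) for i in range(order + 1)]
    columns = []
    while points:
        columns.append(points[0])
        points = [b - a for a, b in zip(points, points[1:])]
    return columns

def run_engine(columns, steps):
    """Produce successive polynomial values using nothing but repeated addition."""
    cols = list(columns)
    values = []
    for _ in range(steps):
        values.append(cols[0])
        for i in range(len(cols) - 1):
            cols[i] += cols[i + 1]  # each column absorbs the one to its right
    return values

f = lambda x: x**2 + x + 41  # tabulate at x = 0, 1, 2, ...
print(run_engine(difference_table(f, 0, 1, 2), 6))  # [41, 43, 47, 53, 61, 71]

Run on f(x) = x^2 + x + 41, the engine spits out 41, 43, 47, 53, 61, 71: the same values you would get by evaluating the polynomial directly, but computed with nothing more than repeated addition, which is exactly what a stack of numbered wheels can do.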


The bigwigs at the Royal Society were obviously impressed by all this. In 1823 they gave Babbage £1,700 (the equivalent of £190,000, or about $317,000 US, today) to go and build his machine. Unfortunately for them, for Babbage, and for science, things got a little out of hand. Instead of building a machine that could perform basic polynomial calculations (equations with multiple terms), Babbage took the opportunity to try to build a machine capable of far more advanced analysis. In the end, after a total investment of £17,000 (about $3.17 million US in today’s money), the project was scrapped (Campbell-Kelly, 2004).


The first working difference engine was eventually built in 1855. Babbage’s machine, as he conceived it, was finally built (more or less just to see if it would actually work) in 1991. As it turns out, the machine works flawlessly to this day and now resides at the Computer History Museum in Mountain View, California, USA.


If you want to get really pedantic, we could go back even further and say the birth of computing is tied to the abacus or some comparably simple device; but in terms of raw automatic computing power, Babbage’s difference engine is what got the ball rolling. Famously, the Allied Forces in WWII were able to crack Nazi codes using machines that worked on similar principles. So it appears that our initial comparison holds some water after all. This isn’t about being right, though; it’s about the fun of finding out the truth.





Campbell-Kelly, Martin (2004). Computer: A History of the Information Machine, 2nd ed. Boulder, CO: Westview Press. ISBN 978-0-8133-4264-1.

Friday 21 March 2014

Sketchy Fact #32: A Not-Very Sticky Situation

The glue for Post-it Notes was invented by accident in 1968 by 3M chemist Spencer Silver. He was trying to invent a super-strong adhesive, but came up way, way short.



Wednesday 19 March 2014

Moore's Law: The Reason Why Your Laptop is Already Obsolete

No matter what kind of electronic device you're reading this on, a better version of it exists or is well on its way. If that makes you angry, blame a physicist.

Physics is tricky business. Other areas of science (chemistry and biology, for example) can get fairly complicated, but there is a certain elegant logic behind them that allows the average person to grasp technical concepts after they put in a little effort. Physics is different. Try as you might to apply overarching logic to the big problems in physics, you will come up short. Even the greatest minds that history has yet produced are unable to explain why you need separate sets of “laws” to describe things that are normal sized or large (baseballs, planets, stars, etc.) versus things that are very tiny to incomprehensibly small in size (electrons, quarks, neutrinos, etc.).


But as dense as physics can get, it isn’t something that we can afford to get frustrated with and give up on. Like it or not, we have built a world in which the seemingly trivial problems of physics (e.g. you can know either the location or the speed of an electron precisely, but not both at once) actually matter. Case in point is the concept known as Moore’s Law.


Moore’s Law isn’t actually a law; it’s an observation first made by Intel co-founder Gordon Moore in 1965. Put very simply, it is the notion that computing power (the ability of your laptop to do things for you) doubles every 18 months. Put somewhat more technically, it states that the number of transistors on a one-inch piece of silicon doubles every 18 months. These descriptions are seemingly similar, but where the former allows for some amusing comparisons, the latter will make your head spin with its details.

Let’s start with a fun comparison. Since computing power is said to double every 18 months, it is important to know what that means. Doubling is an exponential rate of growth. In other words, if a laptop in a given year is said to have a Sketchy Science Power Rating (SSPR) of 1, the next generation would have an SSPR of 2 (doubled), the next would be 4, then 8, then 16, then 32. It isn’t hard to see that things will quickly get out of hand. In only 15 generations, our computer will be humming along with an SSPR of 32,768. Translated into real examples, Moore’s Law explains why the chip in your birthday card that sings you a song has more computing power than the Allied Forces in WWII, or why your cell phone is more powerful than NASA was at the time of the first moon landing.
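If you want to see just how quickly that doubling runs away, here is a quick back-of-the-envelope sketch. The 18-month doubling period and the starting SSPR of 1 are simply the figures from the paragraph above.

# Back-of-the-envelope Moore's Law arithmetic: power doubles every 18 months.
# The starting SSPR of 1 is just the made-up unit from the paragraph above.

DOUBLING_PERIOD_MONTHS = 18

def sspr_after(generations, start=1):
    """Computing power after a given number of 18-month doublings."""
    return start * 2 ** generations

for gen in (1, 5, 10, 15):
    years = gen * DOUBLING_PERIOD_MONTHS / 12
    print(f"After {gen} doublings (~{years:.1f} years): SSPR = {sspr_after(gen):,}")

# After 15 doublings (roughly 22.5 years) the SSPR hits 32,768, the figure quoted above.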


To explain how this is possible, we need to understand a little about how electronics work. We are all familiar with the concept of microchips (or microprocessors, if you prefer). They are the brains of our technology. Many things help determine the speed of a microprocessor, but one of the most important is the number of transistors on it. Transistors are like switches that control the flow of electrons, and controlling electrons is how you make them do work for you, hence the label “electronics.”


Transistors are the modern substitute for vacuum tubes (those big lightbulb-like things you see on contraptions in mad scientist movies from the 1940s). The first transistor was built in 1947 and was 1.3 cm (half an inch) long. Since then, to put it mildly, we have gotten really, really, ridiculously good at making them. Modern transistors are about 40 nanometers in size. For perspective, a human hair is 100,000 nanometers thick. As you can imagine, we can now cram a lot of transistors onto a microchip. In 1965, when Gordon Moore made his observation, he used it to predict that in ten years the number of components on a circuit (read: transistors on a chip) would grow from 50 in 1965 to around 65,000 in 1975. He was correct. The trend has since continued: Intel’s current i7 microprocessor contains 731 million transistors, and its Xeon processor has 1.9 billion. No wonder new laptops are obsolete every few months.
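As a rough sanity check on that prediction, you can run the numbers yourself. This sketch assumes one doubling per year (the rate Moore originally described) and starts from the 50 components quoted above.

# Sanity-checking Moore's 1965 extrapolation, assuming components per chip
# double once a year (the rate he originally described).
components = 50  # the 1965 figure quoted above
for year in range(1965, 1976):
    print(year, components)
    components *= 2
# By 1975 this gives 51,200 components, the same ballpark as the
# "around 65,000" Moore predicted.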


This is all very exciting, but why exactly should you care? Well, the answer lies in the fact that eventually Moore’s Law will cease to be true. The problem with building things on the scale of nanometers is that you become subject to the laws of quantum physics. When you can’t know both the location and the speed of an electron, it becomes very hard to control them. When transistors get too small, electrons can start to tunnel through seemingly solid materials and cause chips to malfunction.


World-renowned physicist and ballsy predictor of the future Dr. Michio Kaku has argued that Moore’s Law will reach its end sometime in the next 10 years. To be fair, the great minds of physics also predicted an end to Moore’s Law in the ’60s and the ’80s, but those predictions were based more on humans running out of ideas about how to organize things on a chip and less on the actual laws of physics.

When Moore’s Law ends, we will need to transition to new forms of technology if we want things to continue to gain power. Some have predicted computers based on the structure of DNA, or ones that actually operate on the quantum level (quantum computers) and may be made up of only a few atoms. Regardless of what the new technology looks like, the transition will likely be a bumpy one. The global economy could suffer, and the price of electronics might take a roller coaster ride for a few years. It’s a challenge that physicists are already working on. The least we can do is try not to glaze over or call them nerds when they go from sounding almost understandable to sounding completely insane.


Good luck, physicists. Now, where is my iPod?

Friday 14 March 2014

Sketchy Fact #31: Iron You Glad You Eat Breakfast?

Cereal contains actual, honest-to-goodness iron. If you smash a bit of it into crumbs, mix it with water, and use a moderately strong magnet, you can see the metal in your breakfast for yourself.


Tuesday 11 March 2014

A Methane Mix-Up: Losing Yourself in The Bermuda Triangle

Special thanks to Veronica in Ottawa, Ontario, Canada for this week’s topic. Veronica was kind enough to send us this link and on the ball enough to ask “Seriously?”

To understand the Bermuda Triangle it helps to first get our bearings on this big old planet of ours. If you connect the dots formed by the island of Bermuda; Miami, Florida; and San Juan, Puerto Rico on a map, you will have created a triangle covering about 1.3 million square kilometers (500,000 square miles) of the North Atlantic Ocean. Some people argue that, based on historical data of plane and ship wrecks, this is the most dangerous area of all the world’s oceans.


Before we dive into possible explanations for the Bermuda Triangle, it is pertinent to stop and make the most popular demand in all of science: show me the evidence! Aficionados of the triangle claim that in the past 100 years, about 100 ships and planes have vanished within its borders, claiming 1,000 lives. That right there should set off your suspicion-o-meter. The numbers are too neat. Nature doesn’t usually work in round numbers like that.


When you have a question about shipwrecks, you are best off asking the folks at Lloyd’s of London. As the world’s top insurer of all things nautical, Lloyd’s has a vested interest (on the order of billions of dollars) in knowing where the dangerous parts of the ocean are. They recorded 428 sunken vessels between 1955 and 1975, and when asked about rates of disappearance in the Bermuda Triangle they have been quoted as saying, “It doesn’t exist.” Yes indeed, insurance rates for ships in the Bermuda Triangle are no higher than for anywhere else in the ocean.


That kind of takes the wind out of our sails, right out of the gate. But there is hope! Lloyd’s of London deals mostly in massive cargo ships and not so much in personal air and watercraft, so let’s just pretend we didn’t read that last paragraph… And let’s ignore the US Coast Guard when it reassures us that the rate of disappearances in the so-called Bermuda Triangle is no higher than anywhere else. Instead we can cling to the alleged reports from the National Transportation Safety Board that indicate that only 10 private planes have disappeared off the coast of New England in the last 50 years, while 30 have gone off the radar in the triangle.


Okay, so what might be causing this string of definitely not-made-up disappearances? As you would expect for this sort of thing, there are more theories than there are confirmed missing persons. The most noteworthy are aliens, wormholes to other dimensions, and government bomb testing. But this is a science blog, so let’s set aside the nonsense right now.


One of the more interesting theories still in the far-fetched category involves something called electric fog. Bruce Gernon and Rob MacGregor, two experienced pilots, have written a book called The Fog: A Never Before Published Theory of the Bermuda Triangle Phenomenon. In it, they describe eerily round clouds they have each encountered on flights in the area. Gernon, on one flight, attempted to fly through a tunnel in one of these clouds, only to have his instruments malfunction and to observe a strange electrical disturbance on the walls of his plane. He also reports emerging from the fog to find himself flying over Miami after only 47 minutes in the air. His route that day was scheduled to take 75 minutes.


Spooky time-travelling fog aside, the more realistic explanations for the Bermuda Triangle (which is totally a real thing, in case you forgot) centre around weather, ocean topography, and human error. The east coast of North America is not only a magnet for hurricanes, it has some of the deepest ocean trenches on the planet as well as a large number of amateur pilots and seamen. Those are all components in a bad equation.


But what about the link that started off this article? What about these supposed Methane Vents? Well, it turns out that the internet may be on to something. The area off the east coast of the United States contains methane deposits in enough quantity to meet the country’s current natural gas output for 16,000 years. These deposits form as dead, decomposing plants and animals at the bottom of the very deep ocean release gases, namely methane. Since the temperature is so low and the pressure is so high, these gases get trapped in the sediment.


Researchers Elchin Bagirov and Ian Lerche have studied similar “methane hydrates” (methane trapped in ice instead of sediment) in the Caspian Sea and deemed them a serious hazard to oil drilling in the area because of their instability. When these methane deposits rupture, they can release gas so violently that the entire water column above them becomes less dense. Theoretically, a ship directly above a methane eruption could be sunk. There is no record of this ever actually happening, but if your ship goes down in a vortex of bubbling methane you aren’t likely to survive to tell the people who record such things.
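For a sense of why a column of bubbling methane is such bad news for buoyancy, here is a very rough sketch. Every number in it (the seawater density, the ship’s assumed average density, the gas fractions) is an illustrative guess, not a measurement from Bagirov and Lerche’s work.

# A very rough buoyancy sketch of the methane-bubble idea. Every number here
# is an illustrative assumption, not a measurement.

SEAWATER_DENSITY = 1025.0  # kg/m^3, typical for seawater

def effective_density(gas_fraction):
    """Density of a water column aerated with bubbles (the gas itself weighs almost nothing)."""
    return SEAWATER_DENSITY * (1.0 - gas_fraction)

def ship_floats(ship_average_density, gas_fraction):
    """A floating body needs its average density to stay below the fluid's density."""
    return ship_average_density < effective_density(gas_fraction)

SHIP_DENSITY = 600.0  # kg/m^3, an assumed average density for a loaded vessel

for frac in (0.0, 0.2, 0.4, 0.5):
    print(f"gas fraction {frac:.0%}: floats? {ship_floats(SHIP_DENSITY, frac)}")

# Somewhere between 40% and 50% gas by volume in this toy model, the water's
# effective density drops below the ship's and buoyancy is lost.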


It is kind of a one-in-a-million shot, the equivalent of lightning from below. And it still doesn’t explain all the alleged plane disappearances, but as far as cool science goes, it is high on the list of possible explanations. And we really do need explanations because, as we have seen, the Bermuda Triangle is definitely a totally real thing that actually exists.

Thursday 6 March 2014

Sketchy Fact #30: A Fortunate Mess

Penicillin was discovered by Sir Alexander Fleming in 1928 because he forgot to clean his lab before going on vacation. Fleming noticed that one of his petri dishes that had been contaminated by bacteria had a bacteria-free spot surrounding a piece of mold. His slobbish tendencies ended up saving millions of lives. 



Wednesday 5 March 2014

Working Out your Willpower: The Long-Suffering Science of Lent and Self-Deprivation

In quite a few of our previous adventures into the realm of science we have touched on the fact that modern research often delegitimizes religious ideas (read Adam and Eve: The Snake-Free Version That Actually Happened). We have also used science to offer explanations for misinterpretations of real things (e.g. Supernovae: The Possibly True Story of the Christmas Star). However, one thing we haven’t explored is what science can learn from religious practices. Today is the first of the 40 days of deprivation that several of the world’s major religions call Lent, so what better time to start?






As far as religious events go, Lent may have the fewest redeeming qualities. You don’t get presents, there is no chocolate, there are no fanciful characters that monitor your behaviour (unless you count God), and you don’t even get a day off work or school until it is over. With all of these elements working against it, why should any of us bother to give Lent the time of day? As it turns out, the practice of self-denial may have some serious benefits to bestow on our weary brains.






Dr. Roy Baumeister is a psychologist at Florida State University who has devoted his research to the study of willpower. He has even gone so far as to write a book entitled just that. Through his research, Baumeister has uncovered evidence that suggests willpower is more like a muscle than a character trait. He split people into two groups: one that practiced willpower for 2 weeks by following random rules at home (no swearing, use your non-dominant hand to open doors, etc.) and another that just lived their lives as normal. At the beginning and end of the experiment, participants in both groups came into Baumeister’s lab to undertake a number of uncomfortable tasks like holding their hands in ice water or squeezing an exercise ball. Results showed that people in the “practice willpower” condition performed better on the uncomfortable activities than the “control” subjects. So, next time you hear someone say “I couldn’t possibly resist ______” make sure you call them out for being lazy.




Though the research on willpower does not yet identify structural changes to the brain that come with exercising it, studies of people who meditate regularly (a form of self-control) have shown via fMRI scans that connections in the prefrontal cortex become denser and more active. The prefrontal cortex is the area of your brain that allows you to make reasoned, rational decisions and solve problems; so if you want to bulk up any neural area, it is certainly one to consider.




Further supporting the willpower-as-a-muscle hypothesis are studies done on the fatiguing effects of willpower by a team of scientists (including none other than Dr. Roy “Willpower” Baumeister). This research into the physiological effects of exerting willpower has shown that people who are required to participate in activities that require restraint (the Stroop Test, thought suppression, emotional control) actually use up their body’s reserves of the brain-fueling compound called glucose (commonly known as sugar) (Gailliot, Baumeister, DeWall, et al., 2007). Yet another study gave two groups of people either lemonade with sugar or lemonade with artificial sweetener and found that participants who received the real stuff (i.e. glucose) were better able to ignore distractions when performing complex mental tasks (Masicampo & Baumeister, 2008).



Aside from the benefits of sugar, these are not findings that we are very likely to embrace in our current “give-it-to-me-now” culture. Psychologists call your inner impulse-freak your id, and in 2014 we pretty much let it run the show. However, a number of society’s greatest minds have instinctively understood that mental power is much the same as physical power. Famously, Steve Jobs wore the same clothes (black turtleneck, jeans, and New Balance running shoes) almost every day so he didn’t have to waste mental energy organizing outfits. Einstein is reported to have owned a closet full of identical grey suits, presumably for the same reason. Apparently Barack Obama does the same thing.




Clearly there is something to this whole “exercising our minds” thing; and what better way to get started than to undertake some Lenten deprivation? Try to pick something in your life that you will notice living without and go for it. The tougher the challenge, the stronger you will be at the end. Be careful though: some research into deprivation has shown that there is such a thing as too much of a good thing. Otherwise sane people in situations of total sensory deprivation (sitting in a completely silent, totally darkened room) can start to hallucinate in as little as 15 minutes (Mason & Brady, 2009). Presumably those were ordinary, id-controlled people though, so by the end of Lent you and your brawny willpower may be able to hold out for at least 20.