The Pencil Curve

(Originally published on September 26, 2010)

What is the shape of the curve traced by one tip of a pencil as you roll it up a cylinder (the pencil being tangent to the cylinder at all times)? The pencil starts just touching the cylinder. As the pencil moves closer to the cylinder, the tip first moves away, then quickly moves back, eventually to stop as the pencil becomes vertical.

Below is a diagram of the pencil’s start point, its end point, and an arbitrary point in between.

The pencil rolling on a cylinder. The tip tracing a curve is marked.

It’ll be easier to parameterize the curve: determine the coordinates of each point as a function of the distance \(t\) from the bottom tip of the pencil to the bottom of the cylinder (where it is tangent to the table). Initially the pencil’s tip is just touching the cylinder and the distance \(t\) is equal to \(l\), the length of the pencil. At the end, when the pencil is vertical, \(t=r\), the radius of the cylinder.

We have, from the arbitrary point, \[\begin{aligned} \frac{y}{x+t}&=\tan 2\alpha = \frac{2\tan\alpha}{1-\tan^2\alpha} = \frac{2r/t}{1-r^2/t^2} = \frac{2rt}{t^2-r^2}\\ \frac{x+t}{l}&=\cos 2\alpha = 2\cos^2\alpha-1 = \frac{2t^2}{r^2+t^2}-1 = \frac{t^2-r^2}{t^2+r^2} \end{aligned}\] Therefore, \[\begin{aligned} x &= \frac{l(t^2-r^2)}{t^2+r^2}-t\\ y &= \frac{2rt}{t^2-r^2}\cdot (x+t) = \frac{2rt}{t^2-r^2}\cdot\frac{l(t^2-r^2)}{t^2+r^2} = \frac{2rtl}{t^2+r^2} \end{aligned}\] We can plot the curve as a function of \(t\):
Plotted pencil curve.
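For reference, here is a minimal sketch that reproduces the plot from the parametric formulas above (not the original code; it assumes numpy and matplotlib, and the values of \(r\) and \(l\) are illustrative):

import numpy as np
import matplotlib.pyplot as plt

r, l = 1.0, 5.0                     # illustrative cylinder radius and pencil length
t = np.linspace(l, r, 500)          # t runs from l (start) down to r (pencil vertical)
x = l * (t**2 - r**2) / (t**2 + r**2) - t
y = 2 * r * t * l / (t**2 + r**2)

plt.plot(x, y)
plt.axis("equal")                   # keep the curve's true proportions
plt.show()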

An Interchange

 (Originally published on September 22, 2010) 

Here is my version of a stack interchange — a system of two highways intersecting such that cars coming from any direction can either go straight, turn right onto the intersecting highway, or turn left onto it in the opposite direction (I didn’t allow U-turns so as not to complicate things too much).

My highway interchange: the red arrows show the paths that drivers going in a particular direction could take. Note that at most two lanes intersect at a point, which makes it conceivable to build the interchange with two levels only; in the diagram the broken lane is below the other.

And here is a slightly different version that doesn’t suffer from the problem of a center being too dense — if you look very carefully, you’ll see that in the version above, the centers of the arcs would all meet unless the drop were made non-uniform.

Addressing the problem of a dense center

It seems to me that it could be built with only two levels — although something makes me think that it’s not that viable to build (since existing stack interchanges require four or more levels). Besides, to allow practical speeds it would either need a rather large surface area or be restricted to passenger cars driving at reduced speed. Here it is in three dimensions:

My stack interchange in three dimensions

My stack interchange, zoomed in. The lane splits into three lanes and each separate lane takes you into one of the three directions.

You can download my Google SketchUp file here.

The Collider

(Originally published on February 23rd, 2010) 

On February 20th, the Large Hadron Collider ramped up its output to three-and-a-half trillion electron-volts. That February 20th–despite what the skeptics had presumed–was not the day the world ended. No, the end of the world has not dawned upon us yet. But now we know that it will–and we know that it will come soon.

Skeptics and religious zealots aside, scientifically, February 20th was actually supposed to be rather uneventful. At three-and-a-half, the Collider operated at half its target energy, and the Higgs boson was unlikely to rear its coveted head. At seven–it was theorized–it should, but the Collider wasn’t ready for seven; that wouldn’t be happening until 2012. Unsurprisingly then, on February 21st, in the absence of any sensation to report, the headlines of some European newspapers (and Page 2 blurbs of others) focused on the questionable value of this very expensive scientific experiment–the most expensive experiment in human history, in fact–calling it “the World’s Greatest Waste of Money”.

The Collider’s computers pumped experimental data at a staggering rate of twenty gigabytes per day. CERN was kind enough to make the data available to the scientific community (or rather, to the tiny fraction of the community capable of consuming data that quickly) but there was a widespread understanding that results–if any–would take weeks to hunt down in the jungle of zeroes and ones.

Consequently, the revelation that came on February 25th startled absolutely everyone. All six detectors embedded in the accelerator’s hull reported several major anomalies. It seemed, based on CERN’s back-of-the-envelope analysis, that the space throughout the accelerator manifested pockets of non-relativistic properties. Particles twice as heavy as electrons were detected. The electroweak and strong forces seemed to switch places. The events were short-lived and highly localized, yet nobody knew what to make of them.

The prevailing mood at CERN was one of bewilderment although there were obviously some who were elated–hoping for “easy” Nobel prizes or dreaming of proving the likes of Steven Weinberg wrong–and many more who were highly critical. Following a policy that could only come out of an institute desperate for wonders, the management board at CERN allowed occasional anomalies so long as they stayed within the prevailing safety guardrails; the experiment was allowed to continue.

But the event that–in retrospect–was far greater in magnitude occurred that day not in Europe, but at the Fermilab particle accelerator in Illinois. One of the particle colliders–similar in design to the Large Hadron Collider but capable only of much less spectacular collisions–reported spontaneous particle activity. Somehow, particle collisions were being observed even though the accelerator had not been launched. Similar events at various accelerators throughout the globe were reported shortly afterwards, roughly in decreasing order of the accelerators’ sizes.

What was going on? One theory put forth somewhat hastily was that, due to some unknown “particle tunneling” phenomenon, all the major accelerators had developed a kind of coupling, whereby an event in one accelerator triggered a respective reaction in all the others. The theory likened this effect to that of quantum tunneling (a phenomenon known to the wide quasi-scientific New Scientist-and-the-like community as the one making teleportation plausible) but on a large scale. The theory gathered widespread adoption despite being entirely unsubstantiated; it did not help explain how such a mechanism was possible, how–if at all–the Large Hadron Collider triggered it, and–most importantly–what the implications of the emergence of such a tunnel were.

The events of February 26th helped answer, at least partially, the latter questions. Concerned about possibly having caused an event that they didn’t fully understand, the scientists at CERN decided to turn off the Collider. A “controlled shutdown” was ordered: the energy would be slowly reduced to zero to allow teams all around the world to observe how the decrease in the Collider’s energy affected the coupled accelerators. The hope was that, if the Collider was the origin of the phenomenon, a shutdown would reduce the intensity of the individual tunnels. Most events in physics, after all, are reversible.

As the Collider’s power approached 95%, the Fermilab team (and then all the others) observed miniature black holes emerge at the sites of the anomalies. As an increasing number of short-lived, microscopic black holes popped up, and as their size and lifetime began to increase, it became clear to all that further power reductions would not be prudent. Evidently, following another theory put forth a few days later, the particle tunneling effect was not reversible; the only way to eliminate a tunnel was to let Nature create a black hole large enough to collapse its endpoints into one point. As there were by now dozens of tunnels between most major particle accelerators throughout the world, stopping them would have the disastrous consequence of creating a black hole large enough to consume all of Earth.

Here we are, barely seven days after the Large Hadron Collider started smashing protons at never-before-seen energies, equipped with the damned knowledge that the Collider is a ticking time-bomb and that the days of our planet are numbered. How much time we have, nobody knows for sure. It all depends on how much longer we can keep the Collider running.

The world is watching the Collider–the tool of our demise–with bated breath. If it breaks down or suddenly drops its power output, we are all going to vanish spectacularly, consumed by a black hole we will have accidentally created in the name of elusive, impalpable knowledge. Like anything man-made, it’s bound to break down. It’s just a matter of time.

February 20th was the day mankind doomed itself.

A New Color Picker

(Originally published in 2010)

I bet you've seen color pickers before. They are neat UI elements that allow you to select a particular color that you may have in mind. They do that by organizing the entire color space in a way that's easily browsed. Usually, pickers show you a 2D panel that displays all colors along two of the dimensions, and a slider for the third dimension; or they only show you a small-ish subset of all the colors.

I’m fascinated with color, especially when there’s math or technology involved. And so I set out to build a picker that displays all the colors, yet requires only a single two-dimensional surface.

To learn all the details of how I generated this new color picker, see this post. In short, however, the idea is this: we want to map a 3D space (0..255, 0..255, 0..255) into a 2D space (0..4095, 0..4095) in a smooth way, so we'll use space-filling curves. "Walking" the R, G and B dimensions, however, gives a pretty unsmooth picker, so instead I "walked" the color intensity, and for each intensity, "walked" over all possible colors of that intensity. I then picked the order in which the colors would appear by sorting by R, G and then B.

The resulting color picker has the interesting property that it displays all possible colors (up to the image's resolution) in a single image:

A New Color Picker
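Here is a minimal sketch of the idea (my reconstruction, not the original code). It orders all colors by intensity and then by R, G, B, and lays that one-dimensional order onto a Hilbert curve; the post doesn't say which space-filling curve was used, so the Hilbert curve is an assumption, and the bit depth is reduced (64 levels per channel, giving a 512×512 image) to keep the example quick:

from itertools import product
from PIL import Image

def d2xy(order, d):
    # Map a 1-D Hilbert-curve index d to (x, y) on a 2^order by 2^order grid.
    x = y = 0
    s = 1
    while s < (1 << order):
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:                      # rotate the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

LEVELS = 64                              # 64^3 colors = 512^2 pixels
SIDE, ORDER = 512, 9                     # 2^9 = 512

# Walk intensity first, then all colors of that intensity sorted by R, G, B.
colors = sorted(product(range(LEVELS), repeat=3), key=lambda c: (sum(c), c))

img = Image.new("RGB", (SIDE, SIDE))
px = img.load()
for d, (r, g, b) in enumerate(colors):
    px[d2xy(ORDER, d)] = tuple(v * 255 // (LEVELS - 1) for v in (r, g, b))
img.save("picker.png")

With the full 8 bits per channel, the same walk fills the 4096×4096 image described above (order 12 instead of 9); it just takes longer to run.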

A Homebrew Computer Alarm

(Originally published on January 7th, 2010)

I wanted to wake up to NPR. There’s a good alarm application for the Mac called Alarm Clock which allowed me to play an arbitrary iTunes playlist on schedule (with bells and whistles such as gradually increasing the volume), but the free version couldn’t deal with playing audio streams (such as, in my case, wnyc.org).

No problem — I used cron as well as OS X’s built-in wake-on-schedule functionality.

First, in System Preferences > Energy Saver, I set a schedule to wake up the computer on weekdays at 6am. Then I edited the crontab: in Terminal I typed

crontab -e

and typed in the editor

1 6 * * 1-5 osascript /Users/strozek/wnyc.applescript

The above tells OS X to run the command osascript at 6:01am Monday through Friday. The script I pass to osascript is the following:

set volume 2
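-- start quietly; Safari opens the WNYC stream, then the volume ramps from 2 up to 5 over the next two minutes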
tell application "Safari"
activate
open location "http://wnyc.org/flashplayer/player.html#/playStream/fm939"
end tell
delay 20
set volume 2
delay 20
set volume 2.75
delay 20
set volume 3.25
delay 20
set volume 3.75
delay 20
set volume 4.25
delay 20
set volume 5

And voilà! I just need to remember to keep the computer plugged in at night and not close the lid.

Being Blind, For a Weekend

(Originally published on December 9, 2009)

For this past weekend’s miniproject, I decided to block all light from entering my eyes. I wanted to experience the world around me with one fewer sense, even if it was just for a few days. In addition, I wanted to see if I could help my eyesight get better. Apparently there has been some success in curing an eye condition that I have (called amblyopia) by blinding oneself for a short period of time. In my case specifically, the connection between my left eye and my brain never fully developed: when I was little, my left eye wasn’t as good as my right, and so I ended up relying heavily on my right eye to see. Blinding yourself fully for about one week, the theory goes, may “reboot” your brain and allow the weak neural connection to re-form. I might be able to see a small, temporary improvement after just one weekend.

I went to sleep on Friday night wearing a special mask I made that doesn’t let any light through. I would wake up already unable to see, which should be a good start to the experiment.

The first couple of hours were interesting, to say the least. Things in general take much more time to do. Moving around the house is not that difficult, but I haven’t built up a mental model of the house (because I had been relying on my sense of sight all the time) so I would sometimes end up getting lost. It takes a while to feel your way to a point of reference that you recognize; if you don’t even know what room you’re in, seemingly easy solutions like tracing the walls don’t help.

Eating food is easier than I thought: I was able to microwave food (after solving the mini-challenge of figuring out where the respective buttons were, since they are flat and can’t be felt), make a sandwich, eat cereal and fruit, and drink water.

Listening to TV was intriguing. I had to imagine what was going on based purely on what I was hearing. While most of the time it was doable, it was not effortless; it was comparable to reading a book. I enjoyed this different way of “watching” TV, but I just couldn’t do it for extended periods of time. It’s curious that losing a sense is a very efficient way of limiting–but not eradicating–one’s TV intake.

I thought I would have problems with typing: while I usually don’t look at the keyboard, I calibrate myself occasionally (and subconsciously) by glancing at where my hands are, so I was worried that I would often be off by one key. Fortunately, the excellent accessibility features of OS X, together with a mental model of the keyboard that I quickly established, helped me type as efficiently as when I was able to see. Keeping my palms on the laptop in a fixed location helped immensely.

Overall, I am very impressed by the accessibility feature in OS X. I’ve been able to use my computer for listening to music, reading and composing email, and writing. Apple has done a great job making the computer usable for the blind.

While it’s commonly thought that shutting off one sense makes the others more acute, at least in my case it was somewhat more complicated. I would say that I was able to perceive much more than before if I focused on a particular sense. For example, I would perceive sounds pretty much the same way as before blinding myself, but if I focused on a particular song or, say, a noise in the kitchen, I was able to extract much more information from it. I could explore food in much more detail and with more expression than before; for example, I was able to identify the individual herbs that went into the chicken breading. I think an overall improvement of the other senses is probably something that takes time, as your brain learns that it can no longer rely on the sense of sight; in the short term, the improvement of other senses during a focused effort is probably due to decreased information “noise” coming from the eyes.

Being blind also completely reshuffles what is easy and what is hard to do on a daily basis. I can receive phone calls but not make them; I don’t know what T-shirt I’m wearing. I am forced to process information much more slowly, which means I can’t, for example, go through many blog posts; but I’m enjoying listening to this audiobook because I can more easily create a visual representation of what is happening (the book was The Picture of Dorian Gray). Normally listening to audiobooks is somewhat painful to me — now I believe that it’s due to the “visual noise” effect.

This kind of visual sensory deprivation causes me to form certain images in my imagination, as if I was seeing them. They are usually just patterns that slowly transform into other patterns. I can’t see color yet (with the exception of a tiny blue speck of light I just saw surrounded by nothingness). This experience is uncannily like being in a dream (I also have difficulties distinguishing colors in my dreams).

I have no perception of time (ironically, I caught myself actually wearing a watch all day) or any sense of how dark it is. Even though my friend told me what time it was, I hadn’t internalized that the sun had already set. I had this strange feeling that it was early afternoon for most of the day. Overall, I’d say that time moves much faster than normal.

After the first few hours had passed, I moved on from being in awe to wanting to be effective. I quickly began to look for objects around me that helped me orient myself. For example, I used the carpet in my room as a reference area — I know, for example, that as I follow the carpet along its perimeter I will be moving around the room, and at any instant I will have a good mental image of what’s around me.

I find edges much more important than shapes; edges are something I can trail; shapes lose their intricacies when all you have is two hands moving somewhat coarsely in three dimensions. Connections between objects and their function become much more important than their form.

Day two.

My morning routine took significantly less time than yesterday. This time I’ve been using my other senses more to orient myself in the space. For example, I’d listen to the ceiling fan and based on my perception of where I was relative to it, I was able to move around the room faster. I think I’m also slowly memorizing some distances, for example the distance in steps between my bed and the bathroom. I’m not doing it consciously but obviously in the absence of visual stimuli I have to find accurate and reliable substitutes.

My dreams were richer, fuller, but I haven’t noticed any difference between how I used to dream before the experiment and now. Writing is tougher: perhaps it’s because I’m a visual thinker, and not seeing the body of the text I’ve just written makes it difficult to create structure. Writing when blind, even with my computer speaking every word as I type it, is more like on-the-fly storytelling than story construction. The only difference is that I can take my time — as a result the prose is more expressive, flows more naturally, and is easier to listen to, but has holes in its structure.

I’ve worked out some tricks to help me get through the day. When pouring liquids, I put my finger in the container so I can feel the level of liquid and not let it overflow. Similarly, I’d check with my finger whether I put enough toothpaste on the toothbrush. I pour the shampoo slowly on my hand and try to figure out how much of it I poured based on the cold feeling that shampoo has on the palm of my hand.

I think I fidget much more now, again probably due to sensory deprivation.

The most challenging, but also the most remarkable difference is in how I process information. Without the sense of sight, all processing is linear: I have immediate access to the last few words, or bars (if writing music), and the rest has to be filled by my brain. Instead of focusing on structure, I need to think about flow — one thought transforming into another; one world blending into another. I produce much less, but what I produce is richer because it has to stand on its own, be engaging at all times. It’s stateless.

Making music was a great experience — in fact, I think I will continue to experiment with making music while blind. I found myself not clinging to the same keys as I always do. Recording music is tricky, but other than that I felt much more creative. Perhaps, if you don’t see the white and black keys, you start focusing on what’s behind them rather than on them.

Naturally, I am more aware of what is where now. While previously my brain could be lazy (it didn’t have to compose elaborate models of the room and the objects within it, because all it took was a quick glance), now the cost of gathering information is relatively high: I have to look around and feel my way around, so I remember much more. I know where all the articles of clothing are in my room. I know what’s on the night stand, in order from left to right. I remember where I put things.

Going about my life was fairly easy when everything around me was in my control. But when things changed and I wasn’t aware of them, I found it fairly difficult to adjust. For example, when some dishes were rearranged, it took me a long while to re-adjust. Once I noticed that the world was different from my model of it, I had to rebuild the model.

I found it pretty easy to interact with other people. In fact, the lack of visual “noise” meant that I could engage much more in what the other person was saying. I remember these conversations better now.

I took my mask off on Monday morning. There was no “epiphany”; I also wasn’t bothered by light. Curiously, my right eye (the good one) exhibited problems similar to those my left one has always had. This was temporary, but I think it means that the “reboot” theory might actually work — the brain weakened the connection to my right eye. It hadn’t been weakened enough to eliminate the bias, but it was a good start.

While the moment immediately following the regaining of sight wasn’t spectacular, the following thirty minutes were… surreal, to say the least. I felt a little out of it, as if the world around me had undergone some strange transformation while I was away. Perhaps that’s what (temporarily) regaining depth perception feels like (I have none because of amblyopia).

In all, I felt empowered to do some of the things I was able to do before, and was impressed to be able to get more from some others. However, I wasn’t as productive as normal. True, part of it was the fact that I had only been blind for two days — I am sure that people who are actually blind have perfected the routines that took me an hour. It’s also not at all certain whether that loss in productivity was more than repaid by the higher quality of the work I came up with during those two days.

Editorial Note

After I published this, I received a comment from a person named Jeremy, which I wanted to include here:

As a blind person, I am a little upset by your generalizations about the blind experience. All your obstacles could have been overcome with a little bit of creative thinking and some adaptive aids. I manage quite well with a screen reader as you mentioned but my cell phone also speaks as well as my speaking/braille watch. I hope that your realizations are taken with a grain of salt, as you didn’t really get a chance to fully accept the differences and adapt over time.

The Zoom Effect

(Originally published on October 19th, 2009)

Before using Squarespace, I built my own front page. As I considered the best way to display series of pictures there, I came up with an interesting way to compress a lot of information into fairly limited screen real estate. The idea was to have a kind of slide show composed of small icons that grow larger as you hover over them; clicking on any icon would bring up the full-size image. That way I could fit a lot of small (32×32 pixels) icons on the screen, yet offer users the ability to browse larger versions (67×67 pixels) easily just by moving the mouse around. The idea, of course, was inspired by what OS X does with the Dock (an effect which, sadly, I have disabled on my computer–but due to different use scenarios). Here is the effect in action (roll your mouse over the images):

The design process I went through is an interesting example of discovery (or serendipity, rather) and how taking an analytical approach doesn’t always yield the best results.

The desired effect will be very familiar to you if you’ve used OS X and the Dock. I want to display a series of small thumbnails of images in a row. If you hover over them, the image that your mouse is closest to gets larger, pushing out the other images if necessary. I wanted the effect to be smooth (so as you move your mouse over the row, images get bigger as they approach the mouse pointer, and then get smaller) and resemble something like this:

The Zoom Effect

There are three variables that I need to be concerned about: how much to magnify the icons (in my case, I wanted to go from 32 pixels to a maximum of 67 pixels), how far out the magnification should affect the icons (in the picture above, the icons two to the right of the center icon are no longer magnified), and how quickly the magnification should drop out (how “drastic” the magnification of the center icon should appear). For each of the icons in the row, I need to figure out how much to magnify them (by convention, let’s say that 1 is full magnification and α is the regular, small size) and where to place them horizontally (because they will push out other icons), subject to the constraint that the icons must remain aligned in a row.

An analytical solution was easy to get to, but very quickly spiraled out of control, and here is how. Let’s consider two configurations:

  • When the mouse cursor is exactly in the center of an icon, by symmetry that icon should have the maximum magnification:

Maximum magnification at center of icon

  • When the mouse cursor is exactly in between two icons, also by symmetry both icons should be of equal size:

Equal magnification in between two icons

 Depending on β, the magnification will drop out quickly (if β is close to α) or slowly (if it’s close to 1).

Since we want the magnification of the icon to be a smooth curve (as the mouse pointer moves across the icons), we simply need to define a continuous function given the three points it goes through: (0, 1) (because at x=0 — i.e. when the mouse cursor is exactly over the icon’s center — we want the magnification to be maximum), (α/2, β) (because when we’re in between two icons — i.e. a distance α/2 away from the center of one — we want the magnification to be β), and (Z, α) (Z being the distance at which all magnification ceases). An exponential curve is the simplest one that we can try:

Magnification as a function of distance from the icon's center

We will then be able to use this curve to determine how much to magnify each icon by. The icons will be sized so that their size, given the distance between their center and the mouse pointer, can be read off of that magnification curve:

Applying the magnification curve to each icon. Past the point Z all icons retain their original, small size

First let’s figure out the full form of the magnification curve. The curve must go through the two endpoints we identified, and be exponentially decaying, so it is of the form

\[y = 1 - \left(\frac{x}{Z}\right)^P\cdot(1-\alpha)\]

(We can verify that at x=0, y=1, and at x=Z, y=α.) We need to compute P based on the third point:

\[\beta = 1 - \left(\frac{\alpha}{2Z}\right)^P\cdot(1-\alpha) \Rightarrow P = \frac{\log\frac{1-\beta}{1-\alpha}}{\log\frac{\alpha}{2Z}}\]

The first icon is simple: determine the distance between the mouse pointer and the center of the icon and use the curve above to read off the magnification (it will be something between β and 1). The subsequent icons are a little more tricky, because in order to figure out an icon’s magnification you have to know how far its center is from the mouse pointer, but the position of the center is itself a function of the magnification! At this point the easiest thing to do is to solve this numerically, by simply iterating over all possible positions of the center and determining the closest consistent one (since we’re operating in a discrete space with the smallest effective resolution of 1 pixel).
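Here is a rough sketch of that numerical step (my reconstruction under the setup above, assuming integer pixel coordinates; curve is a hypothetical function implementing the magnification curve, returning a value between α and 1):

def next_center(prev_right, mouse_x, full, curve):
    # The icon's center must sit half its own width past the previous icon's
    # right edge, but its width depends on where the center ends up.
    best_cx, best_err = None, float("inf")
    for cx in range(prev_right, prev_right + full):     # 1-pixel steps
        size = full * curve(abs(cx - mouse_x))          # size implied by this center
        err = abs(cx - (prev_right + size / 2))         # self-consistency error
        if err < best_err:
            best_cx, best_err = cx, err
    return best_cx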

While each step seems fairly straightforward, the end result is a pretty big hairball. Being lazy, I realized that there must be a better solution to this problem.

And then I realized that so long as the illusion of smoothness is preserved, some simplifying assumptions can be made. First of all, the exponential curve I used initially was too complicated and looked too discontinuous at large magnifications (because of a sharp spike near 0); there must have been something else that’s straightforward to compute. The parameters seemed complicated, too — α and Z could be replaced with just one — a measure of how quickly the magnification should decay — without much loss of the effect.

The Normal curve came to mind — with just one parameter (σ) it was much easier to experimentally determine a value that had a pleasing effect (plus, σ is by definition very close to our notion of “how quickly this should decay”). I also got rid of the self-referential problem (determining magnification requires knowing origin, but origin influences magnification) by looking at not the actual distance (how far is the icon from the mouse pointer after all icons have been magnified), but original distance (how far is the icon from the pointer before magnification).

The resulting algorithm is much more elegant — and produces a more visually pleasing effect:

  • For each icon in the original (i.e. before any magnification takes place) series, determine how far its center is from the mouse pointer (I experimented with just using the x-coordinate, but the nice thing about this algorithm is that any smooth function works, and the actual distance produced a nicer effect than just the horizontal distance)

  • Use the Normal curve to determine its magnification. We want the result to be 1 if the distance is 0 (i.e. the icon is directly under the mouse pointer) and α if the distance is infinite (since the Normal curve dies off quickly, the size would go down to α pretty quickly as well), i.e.

\[N = e^{-\frac{x^2}{2\sigma^2}}\]

\[M = N+\alpha(1-N)\]

  • Place each icon with its magnified size on screen; keep track of how much space each icon took so that subsequent icons can be displayed after it and not on top of it

  • Technically this is enough for magnification. However, this doesn’t produce a smooth effect: since the icons are always pushed out to the right, the “tail” of icons keeps traveling back and forth. We want the entire series to move smoothly, sliding slowly to the left as the mouse moves to the right (go here and watch the icons at the end of the series travel to the left as you move your mouse pointer left to right, across the icons). This is simple to correct, though: keep track of how much space all the icons take (by adding up each size as you go), and then offset all the icons by a fraction of that total space, depending on where the mouse pointer is. Suppose the icons originally take up d pixels, expanded they take up D pixels, and the mouse pointer is at position x (between 0, at the beginning of the series, and d); then we want to offset all icons by

\[x\cdot\frac{D-d}{d}\]
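Putting the steps together, here is a minimal sketch of the final algorithm (my reconstruction, not the original page's code; the 32- and 67-pixel sizes come from above, while σ=48 and the layout details are illustrative assumptions):

import math

def zoom_layout(n_icons, mouse_x, small=32, full=67, sigma=48.0):
    # Return (positions, sizes) for a row of icons magnified around mouse_x.
    alpha = small / full                  # normalized small size, as above
    centers = [(i + 0.5) * small for i in range(n_icons)]   # pre-magnification centers

    sizes = []
    for c in centers:
        x = abs(c - mouse_x)              # distance before any magnification
        n = math.exp(-x * x / (2 * sigma * sigma))
        sizes.append((n + alpha * (1 - n)) * full)          # M = N + α(1-N)

    positions, right = [], 0.0
    for s in sizes:                       # place icons left to right
        positions.append(right)
        right += s

    d, D = n_icons * small, right         # original vs. expanded total width
    offset = mouse_x * (D - d) / d        # slide the row left as the mouse moves right
    return [p - offset for p in positions], sizes

# e.g. ten icons with the mouse hovering near the third icon
positions, sizes = zoom_layout(10, mouse_x=80.0)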


Blinker Frequency

 (Originally published on October 9, 2009) 

How your blinkers never blink with the same frequency as those of the car in front of you

If you’re a driver, you no doubt spend a nontrivial amount of time waiting at an intersection, another car in front of you, both of you wanting to turn left. You’ve probably noticed that the turn signal of the car in front of you doesn’t blink with quite the same frequency as the one in your car.

In fact, I have a strong suspicion that no two cars have the same frequency–at least that’s what it seems to me since I can never find turn signals to be in phase.

And so here I am, waiting for the light to turn green, with two blinkers flashing at different frequencies. I don’t like to do nothing, so I often figure out what this difference in frequencies is. It’s not as hard as it may seem–and it involves no measuring devices! It’s a pretty cool trick that takes advantage of the fact that while it’s hard to measure or compare quantities (such as speed or frequency), it’s relatively easy to detect synchronicity. First, figure out which blinker is faster. Then wait for both blinkers to be momentarily synchronized (i.e. for both to flash at the same time). Count how many times the faster blinker flashes before both are synchronized again (make sure you don’t “skip” a cycle). If the faster one blinked n times, and you captured the cycle correctly, the slower one blinked n-1 times, so the faster one is n/(n-1) times faster. I like to go a step further and memorize what fractions of the form n/(n-1) come out to be as percentages, so I can impress people with percentage estimates while sitting in the car, with no calculator. For example, if the slow one blinks 9 times and the fast one blinks 10 times, the fast one is 11% faster.
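As a quick sanity check of those percentages, a tiny sketch tabulates n/(n-1) for small n:

# n/(n-1) as "percent faster", for the in-car estimates described above
for n in range(2, 13):
    print(f"{n}/{n-1}: {100 * (n / (n - 1) - 1):.0f}% faster")

For n=10 it prints 11% faster, matching the example above.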

This trick works for car blinkers in part because most cars’ blinkers flash with similar but not the same frequency (if the ratio of frequencies is fairly large, you will find it hard to not skip steps–that is, the blinkers will not synchronize fast enough). I also like to go to the gym, get on the treadmill and figure out how fast the person next to me is running by observing the synchronicity of the markings on the treadmill (treadmills have their brand names displayed on the belts)–the trick works because most people run at similar speeds (between 6.5 and 8.5 mph) and, moreover, because people tend to run at quantized speeds, I am often able to figure out the speed precisely.

 

A large group trying to go somewhere

(Originally published on September 18th, 2009)

I’ve frequently been in a fairly large group of people (5 or more) all trying to go somewhere, say, a movie theater. I noticed that the amount of time it took us to actually get going increased pretty rapidly with the size of the group. This fact by itself should be no surprise to anybody; I have a feeling, though, that the amount of time is super-linear in the number of people, and perhaps even super-polynomial. Let’s see if we can derive this relationship. We’ll make some simplifying assumptions, but the gist of the problem should be captured.

Suppose you have \(n\) people in a group. Every person is mostly ready to leave, with the exception of a small number of tasks the person has to do (or can do, given enough time) — you know, the “If we’re not going to leave in the next five minutes, I’m just going to quickly go to the restroom” sort. Assume that each person experiences some distribution of such “events” which derail the effort of leaving. The duration of such events is also a random variable. The group can’t leave if at least one of its members is currently occupied with an event. Let’s say that the probability at any given time that a person is not occupied with an event is \(p\) (\(p\), therefore, is the measure of “readiness”; of course, if \(p=0\), the group will never leave; if \(p=1\), the group will leave at time \(t=0\)).

Assume it takes \(n\) people time \(t\) to leave. Now add a new person to the group. The group will leave at the expected time \(t\) only if the new person happens to be free at that time (with probability \(p\)). If not, the group will have to wait; assuming the events are independent, this will take another time \(t\) (at the end of which the new person may or may not be free). The expected time is therefore

\[tp + 2t(1-p)p + 3t(1-p)^2p + 4t(1-p)^3p + \cdots = tp\left(1 + 2(1-p) + 3(1-p)^2 + \cdots\right)\]

Let

\[S=1+2a+3a^2+4a^3+\cdots\]

Then

\[S = (1+a+a^2+a^3+\cdots) + (a + 2a^2 + 3a^3+\cdots) = \frac{1}{1-a} + aS\]

Hence

\[S = \frac{1}{(1-a)^2}\]

So the above becomes

\[\frac{tp}{(1-(1-p))^2} = \frac{tp}{p^2} = \frac{t}{p}\]

Suppose \(p=0.5\). The expected time is \(2t\): each additional person doubles the amount of time it takes for the group to leave. More generally, if a single person would leave at time \(t_1\), a group of \(n\) leaves at the expected time \(t_1/p^{n-1}\). The amount of time is therefore not only super-linear in the number of people, it’s actually exponential!
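As a quick check, here is a small Monte Carlo sketch of this model (my own, not from the original post; it assumes, as the derivation does, that each failed readiness check costs another full interval of the current expected length):

import random
import statistics

def departure_time(n, p, t1=1.0):
    # Sample one departure: one person leaves at t1; each added person
    # multiplies the wait by a geometric number of "retries".
    t = t1
    for _ in range(n - 1):
        retries = 1
        while random.random() > p:    # new person busy: wait another interval t
            retries += 1
        t *= retries
    return t

p = 0.5
for n in range(1, 7):
    mean = statistics.mean(departure_time(n, p) for _ in range(100_000))
    print(f"n={n}: simulated {mean:.2f}, predicted {(1 / p) ** (n - 1):.2f}")

For \(p=0.5\) the simulated means track the predicted 1, 2, 4, 8, 16, 32 closely.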