Thursday, 13 June 2019

I was in London a little while ago, working on the 18th floor of an office-block. There is an express lift which gets from the ground to the 18th floor (60m up) in 15 seconds non-stop. These graphs were produced using the Phyphox app on my phone, measuring the changes in air pressure with altitude (like the altimeter in a plane) as well as vertical acceleration.

The first graph was from ground straight up to the 18th floor.

The second one was from the 18th down, but with stops on the 17th and 16th (other people wanting to use the lift and interfering with my experiment!) - hence the "staircase" effect for the first 45 seconds or so.

Here is a non-stop trip from the 18th floor down to the ground floor.

The locals tell me that when the lifts were first installed, they were even faster, but they hurt people's ears and had to be slowed down. I have no trouble believing that.

The cabin of an aeroplane is also an interesting place to keep an eye on air pressure. These two graphs show the air pressure measured inside the cabin of the plane during take-off and landing (I'm probably on a no-fly list now).

You can see the cabin-pressure gradually drop from ground-level pressure to around 760hPa as the plane climbs and gradually increase from that back to ground-level pressure again as the plane comes in to land. The change is made gradually over the course of about 14 minutes each time.

Of course the cabin of the plane is pressurised and outside air-pressure at the cruising altitude of 36,000ft is far lower than 760hPa. The cabin pressure is equivalent to being at an altitude of around 2,300-2,400m (a good bit less than ⅓ of the way up Mount Everest).
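As a sanity check, the ~760hPa cabin pressure can be converted to an equivalent altitude with the standard (ISA) barometric formula. Here's a quick sketch in Python; the constants are the usual ISA values, not anything measured here:

```python
# Convert a pressure reading to an equivalent altitude using the
# ISA barometric formula (standard sea-level pressure = 1013.25 hPa).
def pressure_altitude_m(p_hpa, p0_hpa=1013.25):
    """Equivalent altitude (metres) for a given pressure, ISA model."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

print(round(pressure_altitude_m(760)))  # about 2360 m
```

This agrees nicely with the 2,300-2,400m figure above.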

The Phyphox app really is an awful lot of fun. It gives you direct access to all of the sensors on a modern smartphone (and there are lots of them) and lets you run experiments like these with them.

Sunday, 21 October 2018

Experiments with an AD633 Multiplier IC

Something I have always found a little confusing is the idea of signal mixing.  The source of my puzzlement probably has its roots in the audio world of mixing desks for combining the sound from instruments and singers for music recording or live performance.  These mixing desks are (or are supposed to be) additive: their output should be the simple arithmetic sum of all of the inputs, scaled according to how far the sound engineer pushes the fader.

In the RF world, mixing means something quite different.  When we mix RF signals, we (usually) specifically want them to interfere with each other to create frequencies that weren't present in either (because we're usually talking about two) of the original signals.  That is usually the last thing in the world you want to happen in the audio world, but it is how a heterodyne radio works (more on this a little later).  For this to happen, the RF signals need to be mixed in a non-linear way, usually by multiplying them together.  It took a while before this difference dawned on me.

The maths behind mixing in the RF sense of the word is pretty straightforward.  It relies on the trigonometric relationship:-

sin(a) ✕ sin(b) = ½cos(a−b) − ½cos(a+b)

In plain English, if two sinusoidal signals are multiplied together, the resulting signal will consist of two sinusoidal signals at the sum and difference of the two original frequencies. 
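The identity is easy to spot-check numerically. A few arbitrary angles in Python:

```python
import math

# Spot-check the product-to-sum identity at a few arbitrary angles:
#   sin(a) * sin(b) = 0.5*cos(a-b) - 0.5*cos(a+b)
for a, b in [(0.3, 1.1), (2.0, 0.7), (5.5, 3.2)]:
    lhs = math.sin(a) * math.sin(b)
    rhs = 0.5 * math.cos(a - b) - 0.5 * math.cos(a + b)
    assert abs(lhs - rhs) < 1e-12
print("identity holds")
```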

That has to be tried.

Inspired by some of the many great W2AEW videos (Alan is my hero!) - specifically this one and this one - I did some experiments with multiplying a sine wave by a lower-frequency square wave using the diode-switching technique and got some interesting results.  However, the real fun was had with an Analog Devices AD633 multiplier IC.  It's a little beauty: it takes in two signals (differential) and outputs the result of multiplying them together (÷10).

Here is a 1KHz sine wave being multiplied by an 800Hz square wave (the yellow trace is the result):-

It's pretty easy to see what's going on here - the inversions of the sine wave when the square wave changes state are pretty evident.

Here's 10KHz and 800Hz:-

It's a little less obvious here, but if you look carefully, you can still see the sine wave change phase by 180° when the polarity of the square wave changes.

We'll come back to this in a bit.  For now, let's try multiplying two sine waves together, one at 10KHz and one at 800Hz.  In theory, the result will be a signal which is the sum of two new sine waves, one at 9.2KHz and one at 10.8KHz.

Bingo !!  The purple trace is the FFT and - sure enough - there are two (and only two) peaks at exactly the frequencies that the maths predicts.  Note also that there is none of the original 10KHz signal present (there is no 800Hz signal there either, although it would be off-screen to the left even if there was).

Returning to the 10KHz sine wave and the 800Hz square wave: a square wave consists of the sum of the odd harmonics of the fundamental.  So our 800Hz square wave has components at 800Hz and 2400Hz and 4000Hz etc.  So multiplying this by our 10KHz sine wave should produce an output with tones at 10KHz +/- (800Hz, 2400Hz, 4000Hz...).  Let's see if it does:-

Yep.  The cursors show the fundamental pair.  The horizontal divisions are 1.25KHz each, so you can see that the tones are all spaced about 1600Hz apart, above and below the fundamentals.  Again, note that there is no sign of the original 10KHz signal at all...just as the theory predicts.
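The expected line positions can be worked out directly. A little Python sketch (harmonic amplitudes, which fall off as 1/n, are ignored; only the first three odd harmonics are shown):

```python
# Mixer output tones for a 10KHz sine multiplied by an 800Hz square wave:
# each odd harmonic of the square wave contributes a pair of tones
# at 10KHz ± harmonic.
f_sine, f_square = 10_000, 800
odd_harmonics = [f_square * n for n in (1, 3, 5)]   # 800, 2400, 4000 Hz
tones = sorted(f_sine + s * h for h in odd_harmonics for s in (+1, -1))
print(tones)  # [6000, 7600, 9200, 10800, 12400, 14000] - 1600Hz apart
```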

"Why?", you might be tempted to ask.

The plan for the AD633 is to use it as the basis of a bat detector.  Bats' echo-location sounds are somewhere around 48KHz (at least for the Pipistrelle bats that are common in Ireland) - far above the range of human hearing.  The idea is to mix the signal from an ultrasonic transducer (hopefully picking up lots of hungry bats) with - say - a 40KHz sine wave, thus hopefully shifting the bats' sounds into the human-audible range.  This is the same basic principle on which a heterodyne radio receiver works, with a local oscillator being mixed with the target RF signal to produce an intermediate frequency which is then selected with a high-Q filter and detected.

Saturday, 23 June 2018

555 Timer IC

I have been studying the classic and venerable 555 Timer IC recently and have created an animation showing the various signals on it as it moves from state to state.

Also a video...

Saturday, 26 August 2017

Wireshark Traps For Young Players

I ran into an interesting problem in work today.  An Ethernet switch appeared not to be behaving correctly: frames that I knew were being received on an 802.1q trunk (coming in from a WAN carrier) weren't being sent out of an access port, for no reason I could see.  The switch configuration was fine...there was no obvious reason it shouldn't have been working.

A bit of background.  

The IEEE 802.1q header has a bit in it called the CFI bit.  CFI stands for "Canonical Format Indicator". It is (or was) used to distinguish between Ethernet-format MAC addresses ("canonical") and Token Ring-format MAC addresses ("non-canonical").  Basically, this bit is set to 0 on Ethernet frames and 1 on Token Ring frames.  Since Token Ring is (mercifully) now a thing of the dim-and-distant past, this bit is now always set to 0 (which is why most people - including me - have never noticed it and may be oblivious to its existence).  The 802.1q standard specifies that frames received with the CFI bit set (indicating Token Ring frames) should not be transmitted on untagged Ethernet interfaces (because the MAC address format is different and won't make any sense).  This will become important later.

The more recent 802.1ad specification which - among other things - codifies "Q-in-Q" (a.k.a. double-tagging) renames and repurposes this bit so it is now called the DEI bit.  This is where my problems started, but more on that later.  DEI stands for "Discard Eligible Indicator" and is a hint that this frame can be discarded in the event of congestion on a link.  It is in the same bit position as the CFI bit; it just has a completely different meaning now.

This gets me to my first problem.  I was told that my carrier was sending me tagged frames with the DEI bit set (there was a legitimate reason for this).  I figured that my Ethernet switch might interpret the DEI bit as a CFI bit (remember, same bit position in the 802.1q header) and might (correctly, from its point of view) refuse to send the frame on an untagged Ethernet interface.  Sure enough, once the carrier stopped setting the DEI bit, everything started working OK.

That would be the end of the story except that when I ran packet-captures to see what was going on, the frames coming in from the carrier had the DEI bit clear.  I can think of a few possibilities:-

  • My tip-off that the carrier was setting the DEI was incorrect and there was something else going on. Not likely since the problem went away as soon as the carrier changed this (and my theory around this seemed pretty sound).
  • The interface-mirroring on the switch was "lying" about the CFI/DEI bit being set (which didn't seem hugely likely either)
  • There was a problem with my packet-captures.

After some research, I found this (short but very informative) thread.  The TL;DR version is that the Linux kernel "abuses" the CFI/DEI bit for its own internal purposes and always clears it!  This was fine when the bit was CFI (and therefore always 0), but it is distinctly not fine now that the bit should be interpreted as DEI.  As it happened, I had done all of my packet-captures on Linux machines 😞

So I set up an experiment.  I have a machine transmitting Ethernet frames (using this) with the following characteristics:-
  • IEEE 802.1q encapsulation (Ethertype 0x8100)
  • VLAN tag 120
  • CFI bit in the 802.1q header set to 1
Then I used Wireshark to capture these frames on a few platforms.  The results were eye-opening.
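For reference, the 16-bit Tag Control Information (TCI) field that follows the 0x8100 Ethertype packs a 3-bit priority (PCP), the 1-bit CFI/DEI and a 12-bit VLAN ID. Here's a small Python sketch of how the field in these test frames is laid out (field packing only - this is not the tool I actually used to generate the frames):

```python
# 802.1q TCI layout: | PCP (3 bits) | CFI/DEI (1 bit) | VID (12 bits) |
def build_tci(pcp, dei, vid):
    return (pcp & 0x7) << 13 | (dei & 0x1) << 12 | (vid & 0xFFF)

def parse_tci(tci):
    return tci >> 13, (tci >> 12) & 1, tci & 0xFFF

tci = build_tci(pcp=0, dei=1, vid=120)   # the frames in this experiment
print(hex(tci))        # 0x1078
print(parse_tci(tci))  # (0, 1, 120)
```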


Sure enough, the 802.1q header and the VLAN tag are there, but the CFI/DEI bit is set to 0.  Bingo !!


The second packet-capture is done on the same NIC on the same laptop (it dual-boots Windows and Linux).

Even worse!!  Under Windows, I don't even get to see the 802.1q header.  I tried this on two different Windows machines (both Windows 10, admittedly) with the same results.  This could really send you off on a wild goose-chase if you were trying to troubleshoot a real problem.


Perfect: I can see the 802.1q header and the CFI/DEI bit is still set to 1, just as it should be.

The message thread indicates that this only happens if the Ethertype value is 0x8100 (802.1q) or 0x88a8 (802.1ad) but not if another Ethertype is used.  Let's try that:-

Look at that !  Changing the Ethertype from 0x8100 to 0x8101 and now the Linux laptop leaves the CFI/DEI bit untouched (I had to twist Wireshark's arm to get it to parse 0x8101 as an 802.1q frame).  That makes sense: the Linux kernel no longer sees the frame as an 802.1q/802.1ad frame and therefore doesn't see the following two bytes as an 802.1q/802.1ad header that it can mess with.

The first lesson is that you need to be very circumspect about packet-captures unless you are very sure about the behaviour of the platform you are doing the capturing on.

Second, if you have a WAN provider setting that DEI bit (e.g. on a "best-effort" WAN service), be aware that your switch may refuse to pass those frames through.

Monday, 13 June 2016

ATX Bench Power Supply Status LED

Like lots of electronics hobbyists, I have an old ATX power supply salvaged from a PC that I use as a bench power supply.  It cost practically nothing and it works well enough.

However, like any self-respecting geek, I can't look at anything for any length of time without thinking of ways it could be improved. The target of my ruminations today is that LED in the centre of the picture. It was originally wired so that it would glow dimly when the PSU was in "standby" mode (i.e. connected to the mains, but not supplying output to the front panel) and brightly when the PSU was turned on.  This was easy to do (see below), but not a good idea.  It was practically impossible to tell at a glance whether the LED was "bright" or "dim".  It was just crying out to be changed.

Among other things, the ATX power supply outputs two useful signals:-
  • 5VSB : 5V standby, a low-current 5V source that is available when the rest of the power-supply is in standby mode.  This is on the purple wire of the ATX connector.
  • PWR_OK: A logic-level signal that goes high when the power supply has turned on and stabilised. This is on the grey wire of the ATX connector.
In my original version, I simply wired both of these to a red LED via two different resistors: 4.7K on the 5VSB line and 470Ω+1N4148 diode on the PWR_OK line.  If only the 5VSB line was powered, the LED glowed dimly (through the 4.7K resistor) but if both were high then the LED was powered via the 470Ω resistor and glowed more brightly.  The 1N4148 diode on the PWR_OK line was there to stop that line sinking the current from the 5VSB line when it was low.
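For a rough idea of the currents involved in that original arrangement - assuming a red LED forward drop of about 2V and the usual ~0.7V for the 1N4148 (assumed values, not measurements from the build):

```python
# Ballpark currents through the original single-LED arrangement.
# Assumed forward drops: ~2V for the red LED, ~0.7V for the 1N4148.
V_5VSB, V_LED, V_DIODE = 5.0, 2.0, 0.7
i_dim = (V_5VSB - V_LED) / 4700               # standby: via the 4.7K resistor
i_bright = (V_5VSB - V_DIODE - V_LED) / 470   # on: via the 470R + 1N4148 path
print(f"dim: {i_dim * 1000:.2f}mA, bright: {i_bright * 1000:.2f}mA")
```

Under a milliamp versus a few milliamps - a real difference on paper, but evidently not one the eye can judge reliably.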

I decided I wanted to use a multicolour LED instead, red for "standby" and green for "on".  Here is what I came up with:-

When the power supply is in "standby" mode (5VSB on, PWR_OK off), current flows from 5VSB, through the red LED and R3, and hits the drain of Q2.  Because PWR_OK is low, Q1 is switched off and its drain (and therefore the gate of Q2) is sitting at about 2-3V (the 5V from 5VSB minus the voltage drop through the green LED).  This is enough to turn on Q2 (which has a typical gate threshold voltage of 2.1V), so the red LED lights.  Virtually no current flows through the green LED to the gate of Q2, so it stays off.

When the power supply is turned on, the PWR_OK signal is asserted.  This turns on Q1 causing its drain voltage to drop to near ground.  This has the effect of turning on the green LED while simultaneously pulling the voltage at the gate of Q2 to ground, causing it (and therefore the red LED) to turn off.

The 100K resistors are really precautionary: the gates of the MOSFETs draw no current at all.  They are only there to limit current if a MOSFET dies in a way that shorts its gate to ground.  They could be omitted and the circuit would still work just fine.

The whole thing fitted onto a bit of perfboard around 25mm² which was encased in some transparent shrink-wrap. It's not pretty but it works well:-

There may well be better ways of doing this (suggestions welcome in the comments below).

Friday, 20 May 2016

Crimp vs Solder...fight !

For a project I'm working on, I have a requirement to butt-join some power cables together.  I have been soldering them, but for reasons that needn't detain us here they are an awful pain in the ass to solder.  I wondered was there any downside to using butt-splice connectors like these.

My concern was that the contact resistance might be significantly higher than for a soldered joint leading to a significant voltage drop across them (they will be carrying a fair amount of current from a 3.3V power supply).  Time to do some testing.

I started out with four pieces of wire of (approximately) equal length. Three of them I cut in half and then joined back together using one of those butt-splice connectors, a solder joint or a simple twist. The fourth piece of wire I didn't cut: it will serve as the control.

So what is the resistance of each of these?  It is going to be pretty low no matter what, so a four-wire measurement is going to be needed. Happily, I have a 5.5 digit multimeter that does have four-wire resistance measurement capability.

This is what the test setup looks like. One pair of multimeter leads provides the test current and the other pair senses the voltage across the joint.

The results were pretty good:-

Joint Type    Measured Resistance (mΩ)
Uncut Wire    8

Treating the uncut wire as a control, that means that there is only around 2-4mΩ of contact resistance regardless of what method is used.  I have to admit, that's a good bit less than I was expecting.  There is also less difference between the solder joint and the butt-splice joint than I had expected.

As a secondary check, I also measured the voltage drop across the joint while running a decently-high current through it.  My bench power supply can supply a current-limited 5.8A.  Knowing this and measuring the voltage drop across the joint is a secondary way of measuring the contact resistance. Here are the results:-

Joint Type    Measured Voltage Drop (mV)    Calculated Resistance (mΩ)
Uncut Wire    20.4                          3.47

Curiously, the uncut wire measures a little higher than the solder joint. My suspicion is that the current-limiting on my (el-cheapo) power-supply isn't that rigid and it was letting a little more current through as it warmed up.  This is borne out by the fact that the last digit on the ammeter showed the current climbing just a little as the tests went on (just a few tens of mA over the course of the tests) and I did the test with the uncut wire first.  I could have controlled for this by monitoring current with an external - more accurate - ammeter, but I didn't and I think the discrepancy is small enough to ignore (for my purposes, anyway). If anyone is interested in seeing the test repeated a little more rigorously, I do have enough equipment to do it: leave a comment below and I'll redo the test.
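The secondary calculation is just Ohm's law. A quick check with the uncut-wire figures, using the nominal 5.8A current limit (the small difference from the tabulated figure is consistent with the current drift mentioned above):

```python
# Contact resistance from voltage drop at a known test current: R = V / I.
# Millivolts divided by amps gives milliohms directly.
def resistance_mohm(v_drop_mv, current_a):
    return v_drop_mv / current_a

print(round(resistance_mohm(20.4, 5.8), 2))  # 3.52
```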

I think the takeaway is that a solder joint is a little better than a crimped butt-splice joint, but not by a whole lot.  There are certainly factors that I haven't taken into consideration here: will a solder joint age better than a crimped-on connector (I rather suspect it will)?  Which one is mechanically more stable (again, my money is on the solder joint)?  These don't matter in my particular application so - happily - the result is that I should be able to use the much-easier-to-make butt-joint method for joining the cables I need to join.  A win !!

Monday, 29 February 2016

Proportional Representation in Ireland

I am not connected with politics or constitutional law in any way. I wrote this page really as an excuse to study up on something I found interesting. I have done my best to ensure that the content is accurate but if you do find any glaring errors please let me know and I will correct them. If you have any general comments/observations/suggestions about the page, I'd love to hear those also. Thanks.


Since achieving independence in 1921, we in Ireland have elected members to the Dáil (one of our Houses of Parliament) using a system of voting called the Single Transferable Vote (article 16, section 2, subsection 6 of The Constitution). The purpose of this page is to explain exactly how this system works, to explain why many political scientists regard it as one of the fairest systems for conducting elections and to examine some of the mathematical anomalies which can arise from it.

Single Transferable Vote

The basic idea underlying the Single Transferable Vote system of Proportional Representation is that any votes a candidate receives over and above the minimum necessary to be elected should not be wasted. Rather, the voters should be allowed to rank the candidates in order of preference, and any surplus votes a candidate has after being elected should be distributed to the remaining candidates in proportion to the next-highest preferences expressed by each voter.

Some history of the system to go here !

How It Works

The Vote

Voting is simplicity itself. The ballot paper will look something like this:-

Byrne, Gay
Independent Candidate
Haughey, Charles J.
Fine Gael
Lawler, Liam
Progressive Democrats
McGonigle, Eamonn
The All Night Party

The voter simply writes a number beside each of the candidates (or as many candidates as they wish) indicating the order of preference. The voter's favourite candidate will get a preference of 1, the second favourite will get a preference of 2 and so on.

Determination of the Quota

Once the voting is complete, the process of counting the votes begins. The first step is to determine the quota. This is the minimum number of votes which would be sufficient to elect only the desired number of candidates. It is a function of

  • The number of votes cast
  • The number of people who are to be elected
It is calculated by taking the number of votes cast, dividing this by (1 + the number of seats to be filled), adding 1 to this result and discarding any fractional part. So in an area where 10,000 votes were cast and there are 2 seats, the quota would be 3,334 (= 10000 ÷ (1 + 2) + 1). It should be clear that with only 10,000 votes, it is possible for two candidates to obtain 3,334 votes, but not possible for three.
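This quota (the "Droop quota") is easy to express in code. A minimal sketch:

```python
# Droop quota: floor(votes / (seats + 1)) + 1.  Python's integer
# division discards the fractional part, just as the manual count does.
def quota(votes_cast, seats):
    return votes_cast // (seats + 1) + 1

print(quota(10_000, 2))  # 3334, matching the worked example
```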

The First Count

During the first count, all of the first preference votes are counted. At the end of this, one of two things will have happened:-

Possibility #1: One or more of the candidates has reached the quota

These candidates are elected. The next step is to distribute their surplus votes (the number of votes they received over and above the quota). The idea is that these "spare" votes should not be wasted and should be transferable to the voters' second choices.

Possibility #2: None of the candidates has reached the quota

In this case, the candidate with the lowest number of votes after the first count is eliminated and his/her votes are distributed to the voters' second preferences.

Either way, the next step is to proceed to a second count for the purpose of distributing the surplus of the elected candidate or distributing the entire vote of the eliminated candidate.

The Distribution of the Surplus or The Elimination of the Lowest Ranking Candidate

Once a candidate reaches the quota, any excess votes are distributed to the other candidates in proportion to the next preference votes of the elected candidate. A simple example: Assume that the quota is 3,334 (from our earlier example). Assume also, that candidate Charles J Haughey gets 4,000 votes on the first count. None of the other candidates reaches the quota at this stage. He is deemed to be elected and a second count begins for the purpose of redistributing Charlie's surplus.

At the end of the second count, it turns out that of those 4,000 voters who gave Charlie their first preference, 2,000 gave their second preference to Liam Lawler, 1,500 gave their second preference to Gay Byrne, 400 gave their second preference to Eamonn McGonigle and the remaining 100 didn't express a second preference at all.

Since the quota is 3,334, Charlie has 666 surplus votes to distribute (= 4,000 - 3,334). These will be distributed to the other candidates in the following proportions:-

CandidateTransfers from Charles Haughey
Liam Lawler(666 ÷ 4,000) × 2,000 = 333
Gay Byrne(666 ÷ 4,000) × 1,500 = 249
Eamonn McGonigle(666 ÷ 4,000) × 400 = 66

These transfers are added to each candidate's total from the first count. At this stage, the process begins again: If any candidate has now reached the quota, he/she is elected and another count begins for the purpose of distributing his/her surplus. If no candidate has reached the quota, the lowest candidate is eliminated and the next count will redistribute his/her vote to the next voters' next preferences.
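The surplus distribution above can be sketched in a few lines of Python. This is a simplification: fractions are simply discarded here, and the real rules for handling remainders and non-transferable votes are more involved.

```python
# Distribute an elected candidate's surplus in proportion to the
# next preferences expressed on his/her ballots.  Integer division
# discards fractions, as in the worked example above.
def distribute_surplus(total_votes, quota, next_prefs):
    surplus = total_votes - quota
    return {name: surplus * n // total_votes for name, n in next_prefs.items()}

prefs = {"Liam Lawler": 2000, "Gay Byrne": 1500, "Eamonn McGonigle": 400}
print(distribute_surplus(4000, 3334, prefs))
# {'Liam Lawler': 333, 'Gay Byrne': 249, 'Eamonn McGonigle': 66}
```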

There are some subtleties that arise in the distribution of the surplus which lead to a small element of randomness.  It can happen in three different circumstances, the most common of which is where voters haven't ranked all of the candidates (i.e. haven't written any number down against some candidates).  In that case, there may be excess votes available to be distributed (e.g. Charlie's surplus of 666) but some of those votes might not indicate a next preference.  It's probably not useful to delve into this in too much detail here: there is a very interesting analysis of this here, including an example (the Sligo-Leitrim constituency in the 1982 general election) where this randomness actually made a difference.

Interestingly, during Ireland's brief and ill-fated dalliance with electronic voting in 2002, the same random mechanism had to be built into the (electronic) counting system, even though a computer could have performed the count using a more accurate (but impractical for a manual count) system.

The Final Result

The final result is arrived at when either:-

  • The desired number of candidates have reached the quota
  • The number of candidates remaining after the elimination of the lowest-scoring candidate at the end of a count equals the number of unfilled seats

This can happen because voters may not rank all of the candidates on the ballot paper. In our earlier example, the 100 voters who did not express a second preference will not have made any contribution to the second (or any subsequent) count.

But Is It Democratic...?

This system is regarded as among the fairest of the electoral systems in use in the world today. However, it is not without its imperfections. These are discussed at some length in put book reference here.