Bypassing Sonos Updates Check

Sonos has screwed up their update process a few times. What happens is that they publish an XML file showing that a new update is available but forget to publish the update itself on their servers. Since the update check is a mandatory part of the installation process, they effectively lock out new customers from using their software. This most recently happened with the 4.2.1 update for Mac.

There’s a way to bypass this: redirect the Sonos update server’s hostname to 127.0.0.1 (localhost) using your system hosts file. When you launch the Sonos Controller software for the first time, it will try to connect to your own computer to fetch the XML file. The connection will fail, and the installer will move forward in the setup process anyway. After that, you can re-enable access to the update server.

Specific guides:

The exact entry you want to add is:
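The exact hostname didn’t survive in this copy of the post, so the entry below uses a placeholder; substitute the hostname the Sonos Controller actually queries. A hosts entry is just an IP address followed by the name you want to redirect:

```
127.0.0.1    updates.example.com
```

On Mac and Linux this goes in /etc/hosts (editing it requires admin rights); on Windows, in C:\Windows\System32\drivers\etc\hosts.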

Be sure to remove this line after you’ve gotten your system set up so you can get updates in the future.

An appeal for support

If you enjoy this blog and want to give something back, I’d like you to consider a donation to WildAid. My wife Sandra has been organizing a film event and donation drive in order to raise money for saving wild elephants from extinction.

Elephants are being killed at an alarming rate. After they experienced a comeback in the 90’s due to laws restricting the ivory trade, a few policy mistakes and recent economic growth in Asia led to such an enormous demand for ivory that elephants may become extinct in the wild in the next 10 years. President Obama recognized this crisis recently with an executive order.

WildAid tackles the demand side of the problem. With the assistance of stars like Yao Ming and Maggie Q, they create educational public awareness campaigns (video) to encourage the public not to buy ivory, rhino horn, or other wild animal products. They distribute their media in markets like China, Vietnam, and the second largest ivory consumer, the United States.

If you’re interested in attending the film, it will be shown October 9th at The New Parkway theater in Oakland. Your ticket gets you dinner with live jazz, as well as a chance to see an educational and heartwarming movie about an elephant family. (The film is from PBS and does not show any violence; children 8 years old and up should be ok.)

If you can’t come, you can help out by donating on the same page. I keep this blog ad-free and non-commercial overall, so I appreciate you reading this appeal. Thanks, and now back to your regularly scheduled crypto meanderings.

Keeping skills current in a changing world

I came across this article on how older tech workers are having trouble finding work. I’m sure many others have written about whether this is true, whose fault it is, and whether H1B visas should be increased or not. I haven’t done the research so I can’t comment on such things, but I do know a solution to out-of-date skills.

The Internet makes developing new skills extremely accessible. With a PC and free software, you can teach yourself almost any “hot skill” in a week. Take this quote, for example:

“Some areas are so new, like cloud stuff, very few people have any experience in that,” Wade said. “So whether they hire me, or a new citizen grad, or bring in an H-1B visa, they will have to train them all.”

Here’s a quick way to learn “cloud stuff”. Get a free Amazon AWS account. Download their Java-based tools or boto for Python. Launch a VM, ssh in, and you’re in the cloud. Install Ubuntu, PostgreSQL, and other free and open-source software. Read the manuals and run a couple of experiments that build on your current background. If you were a DBA, try SimpleDB. If a sysadmin, try EC2 and monitoring.
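To make the first step concrete, here is a minimal sketch in the modern boto3 library (the AMI ID, key pair name, and region are placeholder assumptions; you’d substitute your own). It separates the launch parameters, which you can inspect without an AWS account, from the network call itself:

```python
# Sketch: launching a small EC2 instance with boto3 (pip install boto3).
# The AMI ID and key name below are placeholders, not real resources.

def launch_params(ami_id, key_name):
    """Build the keyword arguments for ec2.run_instances()."""
    return {
        "ImageId": ami_id,
        "InstanceType": "t2.micro",  # free-tier eligible
        "MinCount": 1,
        "MaxCount": 1,
        "KeyName": key_name,
    }

params = launch_params("ami-0123456789abcdef0", "my-keypair")
print(params["InstanceType"])

# With AWS credentials configured, the actual call would be:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   resp = ec2.run_instances(**params)
#   print(resp["Instances"][0]["InstanceId"])
```

Once the instance is running, ssh in with the key pair and you have a scratch machine to experiment on; terminate it when you’re done so it stops billing.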

Another quote from a different engineer:

‘If a developer has experience in Android 2.0, “the company would be hiring only someone who had at least 6 months of the 4.0 experience,” he said. “And you cannot get that experience unless you are hired. And you cannot get hired unless you provably have that experience. It is the chicken-and-the-egg situation.”’

The Android SDK is also free, and includes a usable emulator for testing out your code. Or buy a cheap, Wi-Fi-only Android phone and start working from there. Within a week, you can have experience with the latest Android version. Again, for no cost other than your time.

I suspect that it’s not lack of time that is keeping unemployed engineers from developing skills. It’s a mindset problem.

‘He may not pick the right area to focus on, he said. “The only way to know for sure is if a company will pay you to take the training,” he said. “That means it has value to them.”’

In startups, there’s an approach called customer development. Essentially, it involves putting together different lightweight experiments based on theories about what a customer might buy. If more than one customer pays you, that’s confirmation. If not, you move on to the next one.

Compare this to the traditional monolithic startup, where you have an idea, spend millions building it into a product, then try to get customers to buy it. You only have so much runway before the money runs out, so it’s better to try several different ideas in that time.

There’s an obvious application to job hunting. Take a variety of hypotheses (mobile, cloud, etc.) and put together a short test. Take one week to learn the skill to the point you have a “hello world” demo. Float a custom resume to your targets, leaving out irrelevant experience that would confuse a recruiter. Measure the responses and repeat as necessary. If you get a bite, spend another week learning that skill in-depth before the interview.

There are biases against older workers, and some companies are too focused on keywords on resumes. Those are definitely problems that need changing. However, when it comes to learning new skills, there’s never been a better time for being able to hunt for a job using simple experiments based on free resources. The only barrier is the mindset that skills come through a monolithic process of degrees, certification, or training instead of a self-directed, agile process.

Baysec update and announcement change

The next Baysec is April 26, 7-11 pm at Irish Bank. Next month will be the fourth anniversary of Baysec!

I won’t be announcing these events on this blog any more because I’d like to reserve it for articles instead. The Baysec announcements are ephemeral and of no value to people outside the Bay Area.

I will still be posting Baysec announcements on the @rootlabs Twitter account. And if you want to participate in discussing Baysec events, please join the mailing list. It is very low traffic: fewer than 10 messages per month.

Old programming habits die hard

While programming, it’s enlightening to be aware of the many influences you have. Decisions such as naming internal functions, coding style, organization, threading vs. asynchronous IO, etc. all happen because of your background. I think you could almost look at someone’s code and tell how old they are, even if they keep up with new languages and patterns.

When I think of my own programming background, I remember a famous quote:

“It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.”
— Edsger W. Dijkstra, June 18, 1975

Large memory allocations are a problem

A common mistake is keeping a fixed memory allocation pattern in mind. Since our machines are still changing exponentially, even a linear approach would quickly fall behind.

Back in the 90’s, I would make an effort to keep frames within 4K or 8K total to avoid hitting a page fault to resize the stack. Deep recursion or copying from one stack buffer to another was bad because it could trigger a fault into the kernel, which would grow the process’s stack and slow down execution. It was better to reuse data in-place and pass around pointers.

Nowadays, you can malloc() gigabytes and servers have purely in-memory databases. While memory use is still important, the scale that we’re dealing with now is truly amazing (unless your brain treats performance as a log plot).

Never jump out of a for loop

The BASIC interpreter on early machines had limited garbage collection capability. If you used GOTO in order to exit a loop early, the stack frame was left around, unless you followed some guidelines. Eventually you’d run out of memory if you did this repeatedly.

Because of this, it always feels a little awkward in C to break out of a for loop, which is a GOTO at the assembly level. Fortunately, C does a better job of stack management than BASIC.

Low memory addresses are faster

On the 6502, instructions that access zero-page addresses ($00–$FF) use a more compact instruction encoding than other addresses and also execute one cycle faster. In DOS, you may have spent a lot of time trying to swap things below the 1 MB barrier. On an Amiga, it was chip RAM versus fast RAM.

Thus, it always feels a bit faster to me to use the first few elements of an array, or an address with a lot of leading zeros. The former rule of thumb has morphed into cache-line access patterns, so it is still valid in a slightly different form. With virtualized addressing, the latter no longer applies.

Pointer storage is insignificant

In the distant past, programmers would make attempts to fold multiple pointers into a single storage unit (the famous XOR trick). Memory became a little less scarce and this practice was denounced, due to its impact on debugging and garbage collection. Meanwhile, on the PC, segmented memory made the 16-bit pointer size insignificant. As developers moved to 32-bit protected mode machines in the 90’s, RAM size was still not an issue because it had grown accordingly.

However, we’re at a peculiar juncture with RAM now. Increasing pointers from 32 to 64 bits uses 66% more RAM for a doubly-linked list implementation with each node storing a 32-bit integer. If your list took 2 GB of RAM, now it takes 3.3 GB for no good reason. With virtual addressing, it often makes sense to return to a flat model where every process in the system has non-overlapping address space. A data structure such as a sparse hash table might be better than a linked list.
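The arithmetic behind that figure is worth spelling out. A minimal node holds two pointers (prev and next) plus the 4-byte payload; this sketch ignores alignment padding and allocator overhead, which would only make the 64-bit case worse:

```python
# Back-of-the-envelope node sizes for a doubly-linked list holding
# a 32-bit integer (ignoring alignment padding and allocator overhead).

def node_bytes(pointer_bytes):
    # prev pointer + next pointer + 4-byte integer payload
    return 2 * pointer_bytes + 4

n32 = node_bytes(4)          # 12 bytes with 32-bit pointers
n64 = node_bytes(8)          # 20 bytes with 64-bit pointers
growth = (n64 - n32) / n32   # ~0.67, the ~66% figure cited above
print(n32, n64, f"{growth:.0%}")
```

With 8-byte alignment, the 64-bit node actually pads out to 24 bytes, so the real-world growth is often worse than the idealized number.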

Where working set size is less than 4 GB, it may make sense to stay with a 32-bit OS and use PAE to access physical RAM beyond that limit. You get to keep 32-bit pointers but each process can only address 4 GB of RAM. However, you can just run multiple processes to take advantage of the extra RAM. Today’s web architectures and horizontal scaling means this may be a better choice than 64-bit for some applications.

The world of computing changes rapidly. What kind of programming practices have you evolved over the years? How are they still relevant or not? In what ways can today’s new generation of programmers learn from the past?