Panasonic CF-Y4 laptop disassembly

I’m a big fan of the lightweight Panasonic ultraportable laptops.  The R-series is small but still usable.  The Y-series offers a full 1400×1050 screen, built-in DVD-RW drive, and long battery life in a 3 pound package.  As a FreeBSD developer, I also find that the BIOSes in Panasonic and Lenovo/IBM laptops are mostly ACPI-compliant, meaning suspend/resume and power management work fine.

Recently, I upgraded the hard drive on my CF-Y4.  I found that these disassembly instructions (another good source) for the CF-Y2 are mostly accurate.  However, there are a few caveats I wanted to note for others with the R/W/T/Y series laptops.

First, all the notes about 3.3 volt logic versus 5 volt logic for the hard drive no longer apply.  The Toshiba hard drive that came in my Y4 uses 5 volt logic, along with a 5 volt motor supply.  In fact, the pins are tied together internally.  It was straightforward to swap in a WD 250 GB drive with no pin clipping necessary.  This may apply to the newer R-series as well, though I haven’t verified it.  If in doubt, use an ohmmeter to verify there is no resistance between pins 41 and 42 on the stock hard drive.

Next, heed the warnings about stripping the top two large hinge screws.  They screw directly into plastic, while the other two hinge screws have a steel sleeve.  Use a good jeweler’s screwdriver for the small screws.  You don’t need to remove the two screws that hold the VGA connector to the case.

When removing the keyboard, pry smoothly in multiple places but don’t be afraid to put a little effort into it.  The glue used to hold it down is surprisingly strong.  Be sure you’ve removed all the small screws from the bottom first, of course; otherwise it won’t pop out.

Be sure to clean the CPU’s heat sink connection carefully and use some good thermal paste when reassembling.  These laptops have no fan (awesome!), but that means it’s critical to make a good connection between the CPU and the keyboard heat sink area.  Also, don’t forget the GPU, which sinks heat through the bottom of the motherboard.  I cut a small piece of plastic to use as a spreader to eliminate any bubbles.  I also put a thin layer of paste along other parts of the internal skeleton where it touches the keyboard.  Once you reassemble the case, monitor the system temperature for a while to be sure you didn’t make a mistake.  I found my temperature actually dropped compared to the factory thermal paste.

TLS/SSL predictable IV flaw

Another attack that was addressed in TLS 1.1 results from a predictable initialization vector for encryption. This allows an attacker to verify guesses about previous plaintext and is an interesting example of how slight implementation variations in well-known cryptographic constructions can introduce exploitable flaws. Phil Rogaway and Bodo Moeller have some more detailed notes on such problems.

Remember that CBC encryption is a way of chaining multiple blocks together into a longer message. It first requires an IV to kick things off, then the encryption of subsequent blocks is made unique via each previous block’s ciphertext. Compare this to ECB, where each ciphertext block is independent and thus reveals information about its contents if plaintext is repeated.
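
To make the chaining concrete, here’s a minimal Python sketch of the difference.  The choice of the pyca/cryptography library is my own for illustration; nothing here is TLS-specific:

    # Minimal sketch of CBC chaining vs. ECB (library choice is mine).
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    iv = os.urandom(16)
    block = b"A" * 16                 # one 16-byte AES block
    plaintext = block * 2             # two identical plaintext blocks

    def encrypt(mode, data):
        enc = Cipher(algorithms.AES(key), mode).encryptor()
        return enc.update(data) + enc.finalize()

    ecb = encrypt(modes.ECB(), plaintext)
    cbc = encrypt(modes.CBC(iv), plaintext)

    # ECB: identical plaintext blocks produce identical ciphertext blocks.
    print(ecb[:16] == ecb[16:32])     # True
    # CBC: the first ciphertext block is chained into the second, so they differ.
    print(cbc[:16] == cbc[16:32])     # False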

An IV must have the following properties:

  • Unique: must not be repeated for any message encrypted with a given key
  • Unpredictable: an attacker who observes any number of messages and their IVs should have no information to predict the next one with probability of success greater than 50% per bit (i.e., indistinguishable from random)

Uniqueness is necessary because a repeated IV reveals when two messages begin with the same plaintext blocks, much as ECB reveals repeated blocks within a message. It’s absolutely critical for other modes of operation like OFB or for stream ciphers, where a repeated seed produces a repeated keystream, which is totally insecure.

Unpredictability is more subtle. The attack on TLS’s CBC IV is based on it being predictable, even though it was unique. More on that later.

Note that an IV does not have to be random. There’s a difference between computational indistinguishability and true randomness. Since you want some assurance that each IV is unique, it’s theoretically better to load an initial seed into a secure PRNG once and then generate only 2^(n/2) output bits before re-seeding it. If the PRNG is based on a secure permutation (say, a block cipher), you are guaranteed the sequence will not repeat as long as you limit the number of output bits before re-seeding. However, in practice, it’s also effective to continue feeding the PRNG entropy as it becomes available, since a short cycle is extremely unlikely.
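
As a rough illustration of the permutation idea, here’s a small Python sketch that derives IVs by encrypting a counter under a separate key. The names and structure are my own example, not a recommendation of any particular interface:

    # Sketch: derive IVs by encrypting a counter with AES under a separate key.
    # Since AES is a permutation, the outputs cannot repeat until the counter
    # wraps, and they are unpredictable to anyone who lacks iv_key.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    iv_key = os.urandom(16)   # independent of the data-encryption key
    counter = 0

    def next_iv():
        global counter
        block = counter.to_bytes(16, "big")
        counter += 1
        enc = Cipher(algorithms.AES(iv_key), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()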

TLS’s record layer provides message boundaries for the application. Each message is typically encrypted in CBC mode if a block cipher like AES is being used. Each time a new message is sent, the last block of the previous message’s ciphertext is used as the IV. This means that an attacker observing the encrypted traffic knows what the next IV will be, even though it is unique/non-repeating.

The attack is simple. After observing a message, the attacker knows the IV for the next message will be its last ciphertext block, C_{n-1}. Using this knowledge, the attacker can test a guess at any previous plaintext block P_x. He does this by constructing a plaintext block of the following form:

P_guess = C_{n-1} XOR P_x XOR C_{x-1}

Let’s break this down. The first term, C_{n-1}, is the known IV for the next message. P_x is the guess for some previous block of plaintext; any block will do. Finally, C_{x-1} is the ciphertext block that immediately preceded the guessed plaintext block. We know from the operation of CBC that P_x was XORed with this value before being encrypted.

When P_guess is encrypted, the IV cancels out (A XOR A = 0), leaving:

C_guess = ENCRYPT(P_x XOR C_{x-1})

As you can see, if the guess for P_x was correct, the ciphertext C_guess will be identical to C_x. If the guess is wrong, the ciphertext will be different. This attack may be unrealistic in scenarios where the attacker cannot submit plaintext to the same TLS session as the target. However, it is feasible on shared connections such as a TLS/SSL VPN.
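
Here is a toy Python sketch of the whole exchange, using TLS 1.0-style chaining where each record’s IV is the last ciphertext block of the previous record. The framing, names, and “secret” are invented for illustration only:

    # Toy demonstration of the predictable-IV guess check (not real TLS framing).
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    last_block = os.urandom(16)   # initial IV, visible to the attacker

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def send_record(plaintext):
        """Encrypt one record, TLS 1.0 style: IV = last ciphertext block sent."""
        global last_block
        iv = last_block
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        ct = enc.update(plaintext) + enc.finalize()
        last_block = ct[-16:]     # the attacker sees this on the wire
        return iv, ct

    # Victim sends a record whose first block is secret. The attacker observes
    # both the IV used (C_{x-1}, with x = 0 here) and the ciphertext.
    secret = b"PIN=4321 pad pad"              # 16 bytes, unknown to the attacker
    c_x_minus_1, ct = send_record(secret)
    c_x = ct[0:16]

    # Attacker crafts the first block of the next record:
    #   P_guess = C_{n-1} XOR P_x XOR C_{x-1}, where C_{n-1} is the known next IV.
    guess = b"PIN=4321 pad pad"
    p_guess = xor(xor(last_block, guess), c_x_minus_1)
    _, ct2 = send_record(p_guess)

    print(ct2[0:16] == c_x)   # True only if the guess matched the secret block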

The important lesson here is that both uniqueness and unpredictability are vital when using IVs.

TLS/SSL MAC security flaw

Following my recent posts on TLS/SSL security, I gave a talk (slides are here) on a security flaw in the record layer that was fixed in TLS 1.1. The last page of my slides gives some interesting links if you’re interested in understanding SSL security better.

This flaw (found by Bodo Moeller) is in how padding interacts with the integrity protection of the actual data being exchanged. Padding is needed because block ciphers encrypt data in fixed-size chunks, and something has to fill the remainder of the last block. This attack is particularly interesting because it allows an attacker to iteratively decrypt part of the message using side-channel leakage.

Side channel attacks are still often neglected, despite proof that they can be performed over the Internet. System designers always seem to have the same initial response when learning about timing attacks: make the computation time constant by adding a calibrated delay. When problems in this strategy are pointed out, their next move is to add a random delay after the computation (which is not the same as blinding).

This usually repeats with each approach getting shot down until they eventually admit this is a hard problem and that appropriate measures need to be integrated with the actual process (not bolted on) and carefully evaluated for unforeseen problems. For example, one fix for this attack is to always compute the MAC even if the padding is incorrect. However, the logic path of noting that the padding is incorrect but continuing anyway still requires a conditional branch, which creates a small but observable timing difference that can be used in a successful attack.
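
For illustration, here’s a minimal Python sketch of the comparison half of this problem: a naive byte-by-byte check exits early and leaks timing, while the standard library’s hmac.compare_digest does the same work no matter where the values differ. The record layout is invented; this is not TLS code, and it doesn’t remove the padding-check branch discussed above:

    # Sketch of the timing hazard in MAC verification.
    import hmac, hashlib

    def leaky_equal(a, b):
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:          # early exit: running time depends on the data
                return False
        return True

    def check_record(key, data, received_mac):
        expected = hmac.new(key, data, hashlib.sha256).digest()
        # Constant-time comparison: do the same work regardless of how early
        # the two values diverge.
        return hmac.compare_digest(expected, received_mac)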

Preventing side channel attacks is a difficult problem. If confronted with them, take the time to get your countermeasures carefully evaluated.

Ptacek vs. Lawson: 2007 predictions revisited

You’ve just finished opening your seventh corporate calendar gift. You’re ten pounds heavier. What better way to celebrate 2008 than revisiting our predictions from last year?

Nate: Predicted! 99% of spam comes via image attachments

[N] Wrong. I do get lots of image spam, and PDF attachment spam was new in 2007, but the lack of “clickability” limits the usefulness of this type. This year, I resolve not to make predictions about spam.
[T] I got more spam from Ron Paul supporters this year than I did from image attachments. I may be 6 months behind the times in calling this an ’07 result, but the bigger news in antispam seems to be the failure of Bayesian antispam filters. Remember when Bruce Schneier wrote that article calling antispam software one of the industry’s success stories? I’d regret that column today if I had written it. And, not that I think this blinding flash of inspiration makes me Kreskin or anything, but the other trend? Email is no longer the frontier of spam; online communities like Facebook are.
[N] Akismet is still a success story.

Thomas: Predicted! A New Mainstream Bug-Class

[N] Right, although a lot of the C++ stuff was already started last year.
[T] I’m giving myself a clean win here: 2007 was the year that C++ fell, in the mainstream, thanks largely to Mark Dowd and John McDonald. The bug class everyone seems to remember here is the delete/delete[] thing: because of C++’s asinine inability to distinguish an array from other complex objects (including vectors), you can lose a program to using the wrong delete operator. But the “rest” of the problems here are far worse. For instance, pretty much nobody has ever written a C++ program without an STL iterator bug. And Alexandrescu-style “modern” C++, which replaces pointers with smart pointer templates, creates memory lifecycle vulnerabilities every time data passes an API boundary. A huge chunk of our infrastructure was written in C++ in the mid-late ’90s, and until recently there was a mass delusion that C++ was safer than C. I don’t want to get into predictions for ’08, but, I just did.

Nate: Predicted! The “Month of X Bugs” meme fades out, finally

[N] Yay, right.
[T] Thank god. Least said, soonest mended.

Thomas: Predicted! A Year Of Cisco Vulnerabilities

[N] Wrong, no one is paying attention to networks right now. As I said, PC/Windows and shiny devices (iPhone) were what attracted researchers this year.
[T] I can’t claim to have nailed this prediction. But I’m not so sure of your policework there, Nate. Nobody is paying attention to IOS vulnerabilities? That’s not what’s holding back the flood: the finger in the dike right now is the fact that few people can find bugs in IOS. How many skilled vulnerability researchers are there in the whole industry? Oh wait: we figured that out two years ago — a good SWAG guess is 1,000. Of 1,000, how many can do low-level C vulnerabilities? A generous half? Of those 500, how many read assembly fluently? Half again? Of those 250, how many have the time and inclination to reverse undocumented embedded operating systems? If there are 100 people in the world who are currently IOS-qualified researchers, I’m shocked.
[N] You mention that skilled researchers are lacking, but I still maintain that is because they’re all focused elsewhere right now. FX, initiator of Cisco buffer overflows, was talking about bar codes this year.

Nate: Predicted! Apple follows OpenBSD, Linux, and Windows, by adding OS hardening features

[N] Right, Leopard did although their ASLR needs some improvement. Also, they threw in a weird userland firewall implementation that no one expected.
[T] Swing and a miss! I grudgingly concede this prediction to you; they did add, uh, “stuff”. But it’s a huge mixed bag, and if you just look at the places where they followed OpenBSD and Windows, they failed decisively. Whatever the Wikipedia editors might want to say, Leopard ASLR is broken and irrelevant; a shellcode tweak speedbump at best. On the other hand, Apple is blazing a new trail in MAC and program sandboxing; the TrustedBSD extensions they’ve provided to lock programs into OS capabilities appear strong, and could finally give OS X a real security advantage over Win32, if Apple handles them well.
[N] You conveniently overlook the fact that I didn’t claim OSX would be more secure than Vista after the changes, only that they would add similar features. The MAC layer is already present in Darwin, just not enabled by default. It will also be interesting to see if they can do it [Allow?] in a less annoying [Allow?] way than Windows [Allow?].
[T] It’s interesting that the most effective Windows security solutions are the behind-the-scenes runtime improvements, and the most effective Apple security solutions are design-level changes. Oh, wait, no, that isn’t interesting.

Thomas: Predicted! Bruce Schneier Will Not Score A New York Times Op-Ed

[N] I’m wrong also. Schneier did not make the move to tamper resistance, but attackers did enter crypto in a big way. Xbox hackers used timing attacks against the 360, and the Mifare stream cipher was reversed with hardware techniques.
[T] This prediction was wrong just days after I made it; Schneier got an op-ed on the airport security CLEAR program on January 21. Schneier gets steadily less relevant to hard skills security every year, but I’ll make a 2009 prediction: he’s going to be angling for a role in politics.

Nate: Predicted! Zero-day exploits in client apps like Office outnumber researcher advisories

[N] Wrong. It looks like Microsoft themselves are finding the most bugs, as should any company that cares about security.
[T] Zero day clientsides increased in ’07, but organically, not exponentially. I call this a miss.

Thomas: Predicted! Drastically Fewer Windows XP/Vista Vulnerabilities

[N] Easy gimme for you. But I was also right in that 3rd-party signing would prove ineffective (example: Joanna’s ioctl flaws found in common signed drivers).
[T] I give myself no credit for predicting this. You only have to make one assumption to figure this out: money buys improved security. Nobody in the industry spends as much as Microsoft on software security. Nobody spends more directly, on third-party software security testing. Nobody spends more internally, on full-time security practitioners, researchers, engineers and trainers. And nobody spends more indirectly, bearing the cost of improved security in every stage of their release cycle. My company probably does less Microsoft work than any other top-tier independent consultancy, but you can call me out for a conflict of interest here. I repeat and amplify this prediction for 2008.

Nate: Predicted! Content producers strike back: broadcast flag legislation passes and allofmp3.com shuts down

[N] Wrong, but Germany did outlaw “hacker tools”.
[T] Here’s what I think: either Macrovision is going to step up and make Blu-Ray’s BD+ scheme a success, and we’re going to have hundreds more crappy DRM schemes, or the critical mass of studios backing off on DRM is going to result in the end of software protection. In a way, it’s too bad: software protection is a fun problem, and one of the few (maybe spam is the only other) where each side of the fight is so evenly matched. I’m watching BD+ in 2008, and I’m not telling you who I’m rooting for.
[N] The big news for 2007 is that the battle for music DRM is over. MP3 (FLAC actually) wins. I’ve refused to buy music online until I can get it in a non-lossy format. It’s too early to predict an outcome for high-def movies, but it seems already obvious that revocation alone is a bad strategy. I’m shying away from making a prediction here due to conflict of interest (I’m a co-designer of BD+) but I will say that in 2008 studios will see the value in a system that requires continual effort by hackers to break each disc versus one that doesn’t.
[T] My siblings don’t share your hatred of DRM; I don’t think Steve has ever asked himself, “what would Nate do?” (people at Matasano do all the time, though).

Thomas: Predicted! TSA Starts Checking Software On Laptops

[N] Wrong, but they did start checking lithium batteries as I hinted.
[T] I retain this prediction for ’08. If you had asked me last year, “which is more likely: a TSA malware screening of laptops due to a scare about wifi and software radios interfering with avionics, or a blanket ban on a phase of matter”, I would not have predicted the ban on the phase of matter.

[N] In summary, both of us got two right. None of our far-reaching predictions came true.
[T] I was right about the bug class. We’ll be dealing with that one for the next 5 years.
[N] We also did something different in terms of giving counter-predictions in response.
[T] I got four counter-predictions right (anti-spam — though I did not anticipate the Paulbots, Month-of-X-Bugs, Apple, and Office zero-days).
[N] I got two right (no IOS hacks, crypto attacks mainstream).
[T] I’m apparently the better predictor, but only when I’m disagreeing with someone else.
[N] I disagree?

Avoiding Comcast BitTorrent blocking

Tonight I attended and spoke at the iSec Forum. My topic was recent flaws in TLS/SSL that were fixed in version 1.1. I’ll continue posting details about them here.

There was a good talk by Seth Schoen of the EFF on detecting RST-spoofing attacks by ISPs. He built a tool called pcapdiff that lets you compare client and server-side packet captures to see if someone is dropping your packets or spoofing new ones. This is what they used to catch Comcast blocking BitTorrent connections, among other things.
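
The idea is simple enough to sketch. This is not pcapdiff’s actual code; it’s a rough Python approximation using scapy and a simplistic (src, dst, IP ID) matching key, both of which are my own assumptions:

    # Rough sketch of the pcapdiff idea: compare captures from both endpoints
    # and flag packets one side saw but the other did not.
    from scapy.all import rdpcap, IP

    def packet_ids(pcap_path, src, dst):
        ids = set()
        for pkt in rdpcap(pcap_path):
            if IP in pkt and pkt[IP].src == src and pkt[IP].dst == dst:
                ids.add(pkt[IP].id)
        return ids

    # Packets the client sent (as seen locally) vs. what arrived at the server.
    client_side = packet_ids("client.pcap", "10.0.0.2", "192.0.2.1")
    server_side = packet_ids("server.pcap", "10.0.0.2", "192.0.2.1")

    print("dropped in transit:", client_side - server_side)
    print("possibly spoofed:  ", server_side - client_side)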

The approach Comcast apparently uses is to send TCP RST packets to both endpoints whenever the Comcast user’s BitTorrent client offers to seed a complete file. It doesn’t interfere with downloads, presumably because that would lose them a lot of customers. However, by preventing uploads once the download is completed, it prevents users from increasing their share ratio or offering new files for sharing.

I mentioned a simple countermeasure BitTorrent developers might use. Instead of announcing a complete seed, every client would announce a complete file except for a single chunk chosen at random. The random chunk index would be changed at a regular interval. That way, clients requesting a chunk would get it nearly all the time but the seed would never get blocked because it wasn’t complete. This behavior (hack?) could be disabled by default.
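
Here’s a rough Python sketch of that countermeasure. All the names are hypothetical; no real client exposes this interface:

    # Sketch: advertise the file as almost complete, withholding one randomly
    # chosen piece and rotating it over time, so traffic shaping keyed on
    # "complete seed" announcements never fires.
    import random, time

    class MaskedSeed:
        def __init__(self, num_pieces, rotate_secs=300):
            self.num_pieces = num_pieces
            self.rotate_secs = rotate_secs
            self._pick()

        def _pick(self):
            self.hidden = random.randrange(self.num_pieces)
            self.next_rotate = time.time() + self.rotate_secs

        def bitfield(self):
            """Bitfield to announce: every piece except the withheld one."""
            if time.time() >= self.next_rotate:
                self._pick()
            return [i != self.hidden for i in range(self.num_pieces)]

        def can_serve(self, piece):
            # We really do have every piece, so serve any request that arrives.
            return True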

This is yet another example of the vantage point problem. Few system designers seem to understand its far-reaching implications. For background, see Ptacek and Newsham or Blaze. The latter summarizes it this way:

“There is unfortunately little room to make conventional loop extender interception systems more robust against these countermeasures within their design constraints; the vulnerabilities arise from inherent properties of their architecture and design.”

[Epilogue:  Azureus developers indicated to me that they have already implemented this option as “lazy bitfield”.  Additionally, they have a weak encryption option for peer chunk transfers.  However, neither of these has an effect on Comcast, who appear to be using Sandvine to implement this blocking.  Instead, they seem to be monitoring connections to the tracker and correlating them with bandwidth consumed by uploading.]