Counter-attack stories come and go, but this time it’s supported by the courts. The question was whether the defendant’s 4th Amendment rights against unreasonable search and seizure were violated by the campus system administrator logging into his dorm computer without authorization. Campus police and the administrator then followed up by visiting the dorm room and gathering evidence.
The 9th Circuit Court ruled that the physical search was not justified, but since the same evidence was found independently by the remote search that the court ruled was justified, it did not violate the defendant’s 4th Amendment rights and was admissible.
Electronic access was justified under a “special needs” exemption. The administrator supposedly went against Qualcomm’s request to wait for an FBI warrant, and made no effort to collect extensive evidence (or to create/delete files), reinforcing the point that it was not an evidentiary search. The claim that he was acting to protect the campus email server (and not to stop the Qualcomm intrusion, which was outside campus jurisdiction) also helped.
The conclusion is that “requiring a warrant … would disrupt the operation of the university and the network that it relies upon to function.” This seems like a very weak claim. The administrator had already successfully blocked the connection to the email server once, and could presumably have put in a firewall rule blocking all SSH (or backdoor) TCP connections to the email server from the dorms. The ruling presumably quotes his testimony that the “user had obtained access … restricted to specific system administrators, none of whom would be working from the university’s dormitories.” The IP addresses the attacker used were referenced only by the last 8 bits, indicating a simple class-C filter rule would have been sufficient. Better yet, block all Internet access by disabling the Ethernet port on the switch the attacker connected through.
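To make that last point concrete, here is a minimal sketch (hypothetical addresses, since the ruling cites the attacker’s IPs only by their last octet) of why a single class-C (/24) rule would have matched every address the attacker rotated through:

```python
import ipaddress

# Hypothetical dorm subnet; the real network numbers are not in the ruling,
# which referenced the attacker's addresses only by their last 8 bits.
dorm_net = ipaddress.ip_network("10.20.30.0/24")

# Addresses that differ only in the last octet...
attacker_ips = ["10.20.30.7", "10.20.30.112", "10.20.30.200"]

# ...all fall inside the same /24, so one filter rule covers them all.
for ip in attacker_ips:
    assert ipaddress.ip_address(ip) in dorm_net
print("one /24 rule matches every observed address")
```

In practice the rule would of course live as a router ACL or firewall entry denying that /24 access to the mail server, not as host code; the sketch just shows that the match is trivial.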
A member of the University of Wisconsin IT staff posted a curious commentary along these lines:
If you hack University servers from your computer (or even if the computer is being used as a zombie), and then take steps to hide your identity or otherwise conceal your activities, your network access will be removed, such removal will be actively enforced and verified, and any immediate actions required to protect the security and integrity of the University network and computing resources will be taken.
In other words, we can and will block your access. But then later,
Academic, legal, and possible criminal action will then follow, as warranted. These were exigent circumstances, and not done under the guise of law enforcement, but rather the protection of critical university resources from activities clearly and explicitly disallowed by numerous University information technology, housing, academic, and general policies (not to mention various federal and state laws).
Sorry, but if you’re so capable of removing, enforcing, and verifying network access, why was a raid of the dorm room by the administrator and campus police so urgent that a warrant couldn’t be obtained? They can’t say “because he might destroy evidence since he probably knew we’d detected him” without treading into the waters of this being a law enforcement action, and thus an illegal search. So they try to have it both ways, claiming that the electronic and subsequent physical search were necessary to take immediate action to protect the email server, and not for evidentiary reasons, while their previous actions showed that they were fully aware and capable of preventing the attacker from accessing the email server via filter rules.
The court also ruled that computers and the data they contain are considered private, even when attached to the university network. However, connecting a computer to the network implies assent to the network owner’s policies (an even vaguer act of agreement than a click-through license, since it’s not clear you have the policy in front of you when you plug into a network jack).
The university should have had a monitoring clause in their computer policy. Instead they had limitations on access to data (“[i]n general, all computer and electronic files should be free from access by any but the authorized users of those files.”) This helped reinforce the point that his computer could be considered private.
Finally, if you’re a hacker:
- Don’t hack hosts on your local network. The more entities (read: bureaucracy) you can layer between you and the target, the better.
- Don’t do activities from your computer that identify you (check email, log into your legit account, etc.) while hacking.
- Don’t hack from your own computer or any computer remotely associated with you.
If you’re a security vendor:
- Core Impact just got a whole lot more valuable. Ivan, can we have a “just looking” mode that gathers info without touching anything that looks private?
- NAC doesn’t solve the problem of taking a set of connections and tracking it back to a port. Who’s going to help administrators like this who have problems installing firewall rules?
3 thoughts on “Bright future for counter-attacks”
They obviously have a “good” lawyer and were very lucky with this judge. If they logged onto his computer remotely instead of just blocking the port, when it could be proven that they were able to block it, then their argument does not stand up. He should have a chance at a civil case against the University.
Now, if they can prove they have no way to block the port or the traffic with any manner of security measures, then they should be held partially responsible for the intrusion as well.
Now the other problem is… this is now a precedent.
One thing to remember is that this was quite a while ago. I am certainly not defending anybody’s actions (not at all, don’t even think that I am), but network security has come quite far since this whole thing started. Firewalls? Yeah, a reality now, not as common then. The server itself was AIX, and AIX didn’t have host-based firewalling until last year (2006). There were certainly other methods available, though, like blackhole routes, ACLs, wait for the FBI, etc.
Another thing to consider is common carrier law. The UW-Madison, as with many institutions, walks a narrow line between the pressures of keeping an orderly, useful network and losing their defense as a “common carrier” for policing their network. The UW-Madison is doing some good anti-RIAA work in this regard (despite what the Wired article implies), and the policy is a lot more clear now. When Heckenkamp was terrorizing servers at the UW the policy was a lot less clear. Even law enforcement was relatively clueless about electronic issues back then, as compared to now. Getting them to do something was very, very hard, because they just didn’t understand.
In this particular case both sides are equally dumb, for lack of a better word. Heckenkamp couldn’t be bothered to use common sense, like any of the points you outline above. Jeff Savoy gave Heckenkamp’s defense attorneys enough ammunition for years of appeals. I don’t like the idea that anybody was lucky with this judge. Justice isn’t about luck, it is about proof (yeah, yeah, I know how it goes sometimes). Heckenkamp left a massive trail of electronic destruction, though, and it isn’t hard to imagine that there was enough of it to point to him definitively. It was the collection of some of it that was in question, and that almost ruined it.
I can imagine that there are a lot of folks, law enforcement and others, that breathed a sigh of relief at this ruling. Many people put a lot of time into collecting evidence only to have it jeopardized by a vigilante. That’s why I have a hard time believing this will be a precedent. It worked this time, barely, but when you work hard on something you don’t want your time to be wasted. I can envision future situations where the vigilante finds themselves fighting both the hacker and the law enforcement, with the vigilantes ending up in jail, too. Vigilantism will certainly not be a corporate policy, either. People want convictions, not eight year protracted legal battles.
Bob, nice thoughts. I didn’t notice that this was a 7 year+ battle. The biggest hole in the case for the electronic/physical search being necessary to protect the email server is that the admin demonstrated he had the ability and means to block the attack. Changing IP addresses within a class C is still well within the range of IP filtering. The attacker could then just log into a middle box outside the school to get around it. So patching the hole(s) would be a smart parallel activity.
The nice thing about it being an email (and not shell) server is it’s likely the permitted services would only be SMTP and POP/IMAP (maybe +SSL, but not likely). This gives a lot of flexibility in filtering and patching to keep the system running while locking out an ongoing attack.
UW-Madison staff should be sentenced to play CTF at Defcon every year.