Boston Security Meetup - 2/8/2014

These are my notes from the Feb 8, 2014 Boston Security Meetup, which was held at Microsoft's New England Research and Development Center.

NSA Spying Concerns? Learn counter surveillance!

Gary S. Miliefsky, Snoopwall

Counterveillance is the practice of being unseen. You could also describe counterveillance as being invisible, or being off the grid.

Encryption is a form of counterveillance. In some contexts, encryption is easily noticed, and can raise flags.

In the 1950s, the CIA hired two magicians to teach them about counterveillance.

In the home, we have "smart" televisions that are equipped with webcams; these webcams are hackable. The Xbox has a digital camera which watches as you play games; it's capable of facial recognition.

Verify first. Trust never.

There's a congressional mandate to equip all vehicles with event data recorders.

Companies have filed patents for camera sensors in restroom faucets in toilets.

Worth reading: The Official CIA Manual of Trickery and Deception.

Here's a challenge: how do you buy a book anonymously? (One suggestion offered by the audience: use Bitcoin, and have the book drone-dropped in the middle of a field. Another suggestion was to use cash at a local independent bookstore.)

Target (the store) should have practiced counterveillance. lists all US data breaches. To date, there have been over 630 million records breached, roughly twice the US population; by the numbers, we've all been affected twice.

Misdirection matters. Want to see who's interested in your network? Set up a honeypot. Let people eavesdrop on your honeypot. Mislead them, and eavesdrop on what they're doing.
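
A honeypot at its simplest is just a listener that records whoever connects; a minimal sketch in Python (the fake FTP banner, localhost binding, and single-connection limit are illustrative choices, not details from the talk):

```python
import socket
import threading

def run_honeypot(host="127.0.0.1", port=0, max_conns=1):
    """Listen on a TCP port, record who connects, and serve a misleading banner."""
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))          # port=0 picks a free ephemeral port
    srv.listen(5)
    bound_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            log.append(addr)                      # eavesdrop: who's probing us?
            conn.sendall(b"220 ftp ready\r\n")    # misdirection: pretend to be FTP
            conn.close()
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return bound_port, log, t
```

A real deployment would log payloads and timestamps too; the point is that everyone who talks to a machine with no legitimate purpose is, by definition, interesting.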

Be aware of your surroundings. If you want to carry on a private conversation, do it in person, and pay attention to who's standing nearby.

Miss Teen USA had her smart phone attacked. The attackers activated the camera, and used it to take pictures of her coming out of the shower. The attacker was identified, and charged with COPPA violations, for posting naked pictures of her.

Counter counterveillance technologies: disposable phones, home firewall management, bug detection devices. Someone discovered an automobile window tint that blocks cell phone signals.

Counterveillance toys: spoof card, hidden camera detectors, laser microphone blockers. Look around the internet and you'll be able to find this stuff.

Encryption is a good first step.

Use covert channels to make the exchange of data invisible. For example, ssh over ICMP, or ssh over port 80. Transmit data where folks aren't likely to look for it.
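
Tunneling over ICMP works because echo packets carry an arbitrary payload that is rarely inspected. A sketch that builds (but does not send) such a packet, following the RFC 792 echo layout; the identifier and payload are hypothetical:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Ones-complement internet checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_icmp_echo(ident: int, seq: int, hidden: bytes) -> bytes:
    """An ICMP echo request (type 8, code 0) whose payload smuggles `hidden`."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum field zeroed
    cksum = icmp_checksum(header + hidden)
    return struct.pack("!BBHHH", 8, 0, cksum, ident, seq) + hidden
```

Actually sending the packet would require a raw socket (and root privileges), which is why ICMP tunneling tools usually run as privileged daemons on both ends.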

Decentralize your data. Store pieces in different locations. Use peer to peer networking.
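
One way to sketch the split-and-decentralize idea is XOR splitting, a toy all-or-nothing scheme (a real deployment would use something like Shamir secret sharing): each share alone is random noise, and every share is needed to reconstruct.

```python
import os

def split_secret(secret: bytes, n: int = 3):
    """Split `secret` into n shares; every share is required to reconstruct."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = bytes(secret)
    for s in shares:                      # fold each random pad into the last share
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine_shares(shares):
    """XOR all shares together to recover the original data."""
    out = shares[0]
    for s in shares[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

Store each share in a different location (or with a different peer) and no single storage provider holds anything meaningful.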

Supply chains matter (e.g., the NSA's program of package interception and tampering).

Exercise: look through CVEs. How many can you find that describe vulnerabilities in Cisco device components?

Read the STIGs. STIG stands for Security Technical Implementation Guide.

Analyze your network traffic. Someone else is probably analyzing it too.

Coca-Cola is probably the best at counterveillance. Their recipe for Coca-Cola syrup is still secret.

If you don't need it, don't store it.


A Cyber Law How-to Guide

Andrew Levchuk, Bulkley Richardson

How do you prevent data from being exfiltrated on USB drives? Fill the USB ports with Elmer's glue.

Prosecution of white collar crimes is heavily based on intent. This is one of the things that makes white-collar crime difficult to prosecute.

What are computer crimes? A lot of it comes down to authorized access. For example, "hacking in" involves obtaining access to resources that you are not permitted to see or use. Computer crimes can also involve exceeding levels of authorization.

Law enforcement has no special rights to protected data. When it comes to access to protected data, law enforcement has the same rights as everyone else. That said, FISA authorizations generally exceed regular law enforcement authorizations.

Many computer crimes are modern-day equivalents of more traditional crimes. For example, identity theft has been around a lot longer than computers have.

The Computer Fraud and Abuse Act (the CFAA, 18 USC 1030) was used against Aaron Swartz. The CFAA applies to people who (1) access a computer without authorization, and (2) use that access to do specific things. Examples of CFAA violations include hacking into computers, routers, or ATMs, snooping on network traffic, unauthorized access to email, defacing websites, and denial of service attacks.

Legal suggestion: have a (login) banner stating that unauthorized access is prohibited. Having such a banner prevents the "I didn't know" defense.

A "protected computer" is any computer that's operated by the government, or any computer that's used in interstate commerce. That covers a lot of computers.

CFAA also applies to transmission of code, with the intent to cause damage; and damaging a computer with the intent to extort money.

In the Aaron Swartz indictment, he was charged with breaking into a restricted-access network closet, and with unauthorized access to a computer network.

The Wiretap Act (18 USC 2510) covers interception of wire, oral, and electronic communications. This includes acquisition of traffic, listening in on telephone calls, and capturing email. Law enforcement needs special warrants to obtain information that would be covered by the wiretap act.

Would the wiretap act cover monitoring of traffic from a Tor exit node? For government access, it would depend on where the exit node is located (US vs non-US). Private interception of exit traffic would fall under the wiretap act.

Security for the coming vehicle system

William Whyte

In 2012, there were 32,000 people killed in automobile accidents. Vehicle warning systems may be able to significantly reduce this number. A one-second warning could turn a fatal accident into a non-fatal one.

Proposed vehicle-to-vehicle (V2V) systems are based on 802.11p standards. The frequency spectrum for this technology was reserved in 1999.

Warning systems broadcast a Basic Safety Message (BSM) 10 times per second. This message contains information about the vehicle's location, speed, and direction of travel.
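
A sketch of what encoding a BSM might look like; the real wire format is defined by SAE J2735, so the field layout and units here are made-up stand-ins:

```python
import struct

# Hypothetical field layout; the real BSM format is defined by SAE J2735.
BSM_FORMAT = "!ddff"   # latitude (deg), longitude (deg), speed (m/s), heading (deg)

def encode_bsm(lat, lon, speed, heading):
    """Pack one Basic Safety Message into a fixed-size binary payload."""
    return struct.pack(BSM_FORMAT, lat, lon, speed, heading)

def decode_bsm(payload):
    """Unpack a received BSM payload back into named fields."""
    lat, lon, speed, heading = struct.unpack(BSM_FORMAT, payload)
    return {"lat": lat, "lon": lon, "speed": speed, "heading": heading}
```

A broadcast loop would pack, sign, and transmit one of these every 100 ms.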

The NHTSA (National Highway Traffic Safety Administration) has been working on regulations that would mandate V2V warning systems. The proposed regulations should be ready in 2014, but they probably won't go into effect until 2021 or 2022. Please read the proposals when they come out, and respond to USDOT with your concerns. It's important to get this done right, and we want to hear feedback.

We don't want V2V systems to be used for tracking or surveillance.

Some of the technical challenges we're facing: bandwidth bottlenecks, the cost of technology (it has to be inexpensive enough to include in every car), and limits on device processing power.

V2V messages are signed with ECDSA. The messages have 64-byte signatures and 32-byte private keys.

Vehicles only need to verify signatures when they intend to take action on a specific message. Warning systems won't verify signatures when no action is required.

Certificates contain authorization information, but not identification information. A car will have different certificates for different applications. Vehicles will be issued 20 or so certificates at a time. Vehicles will be able to choose which certificate they use at any given time, and how often to change the certificate they're using.

No single device will know the full set of certificates associated with any particular vehicle.

Cars will get an enrollment certificate. The enrollment certificate will be used to obtain a series of "pseudonym" certificates. The pseudonym certificates will be used to sign V2V warning messages.

We plan daily publication of misbehaving certificates, via certificate revocation lists.

The certificate issuing system has a "registration authority". The registration authority is a middleman. The registration authority gathers signing requests, shuffles them, and submits them to a signing authority. The certificate authority won't know who created the signing requests, or who the issued certs belong to.

The V2V system will use Butterfly keys. The device will generate a seed key and an expansion function. The expansion function generates additional keys (from the seed key).
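
The one-seed-to-many-keys idea can be sketched with an HMAC-based expansion function. Note this is an analogy, not the real construction: the actual butterfly scheme expands elliptic-curve key pairs, but the shape is the same, one secret seed plus an expansion function yields many unlinkable keys.

```python
import hashlib
import hmac

def expand_keys(seed: bytes, count: int):
    """Derive `count` distinct keys from one seed.

    HMAC stand-in for the butterfly expansion: indexing the seed with a
    counter produces a deterministic, unlinkable-looking series of keys.
    """
    return [hmac.new(seed, i.to_bytes(4, "big"), hashlib.sha256).digest()
            for i in range(count)]
```

The device never has to store the whole batch; any key can be regenerated on demand from the seed and its index.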

In general, the certificate system uses a lot of random generation and shuffling of certs.

Correlating Behaviors

Tom Bain, CounterTack

Everything is a behavior. Behavior matters in the security industry. Do you understand your employees' behavior? Do you understand your attackers' behavior?

Who are your internal actors? Your external actors? What are acceptable behaviors? What are high-risk behaviors?

What kind of outcomes are you trying to achieve with your security policies?

Context matters when studying behaviors. What contexts do the behaviors occur in?

The human element is the easiest to compromise and manipulate.

Can you map behavioral analysis to security objectives?

A couple of trends worth noting

  • 29% of attacks use some form of social engineering
  • 72% of compromised assets exist at the user end (e.g., on laptops or phones)
  • 70% of IT theft occurred within 30 days of people leaving their jobs.

Examples of risky behavior

  • Employees who don't notify their employer about data loss
  • Personally-owned devices on corporate networks.

How do you get employees to be more security aware? How do you motivate employees to be more security conscious?

It's important to have visibility into an attacker's behavior, while the attack is occurring. Learn from their behavior, and apply their behaviors to your own security policy. Know the techniques that attackers are using.

What behaviors really matter? In other words, what behaviors create the most risk?

Given new (previously unobserved) behavior, how do you assess the level of risk associated with that behavior?

If you know the attacker's motivations, you'll be able to make educated guesses about what they're going to go after, and what they might try to do.

Elicitation is the subtle extraction of information during an apparently normal and innocent conversation.

Policy statements should contain enough information to guide decisions of non-technical people. This is hard to do. For example, if your policy is "never open email from suspicious sources", then you'll have to explain what a suspicious source is.

Training helps, but you have to tailor training to different roles, and to different tasks. A person taking a training course has to understand how it's relevant to their day-to-day activities.

Blitzing with your defense

Ben Jackson

When it comes to security, we are in a "prevent" defense. In football, a prevent defense guards against really big plays at the cost of conceding small ones. Much of our security defense works like this.

The traditional incident response model treats each incident as a separate event. This model isn't suitable for persistent attackers. Separate incidents might be independent, or they might be linked.

Good paper on this idea: "Intrusion kill chains" by Hutchins, Cloppert, and Amin.

The NERC HILF report is also good reading. NERC is the North American Electric Reliability Corporation, and HILF is High Impact, Low Frequency. ( might be the report).

There's a concept called "attacker free time". This is the period between when an exploit occurs, and the containment of that exploit. During this period, attackers can do whatever they want. If there are several attackers, you can have overlapping windows of attacker free time.
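
Overlapping windows of attacker free time can be totaled by merging intervals; a small sketch (timestamps are in arbitrary units):

```python
def total_free_time(windows):
    """Sum the time covered by (exploit, containment) windows, merging overlaps."""
    merged = []
    for start, end in sorted(windows):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)   # extends an open window
        else:
            merged.append([start, end])               # a new, separate window
    return sum(end - start for start, end in merged)
```

With two attackers at (0, 10) and (5, 20), the overlap is counted once: the organization was exposed for 20 units, not 25.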

Data is just data. Intelligence is meaning derived from the analysis of data. For example, the Death Star plans were data; the exhaust port vulnerability was intelligence.

Easy stuff: What are we seeing, and when?

Hard stuff: what are they doing? What are they after?

Really, really hard stuff: Who is doing this? Why are they going after that?

Gathering data is easy. Turning that data into intelligence is not.

When observing bad IRC channels, don't use devices that can be traced back to you. Due to the nature of IRC, newcomers are easily noticed. It's hard to exfiltrate data from IRC.

Some groups like to boast about their exploits on social media. Thus, you can keep tabs on some attackers by monitoring social media. Social media makes this kind of monitoring very easy.

Pastebin is a great source of data.

Search engines provide information about what attackers are planning. Search for your company name, domain names associated with your company, the name of your CEO, your IP address ranges, and so forth.

If you decide to go undercover, you'll need a believable cover identity. is a good place to start, but that's just the beginning. Cover identities cannot be created, they have to be grown and cultivated. Start cultivating your cover identities well before you need them.

Cultivating your cover identity can be fun, but you have to be consistent with real world facts. For example, if you claim to have been a member of Delta house, then you'll need to know that Delta house had their charter revoked from 2009-2010. Those kinds of facts are covered in the media, and someone that decides to research your cover identity can find them.

Never cross-contaminate your cover identities.

If your cover identity does social media posting, be sure those posts are made at appropriate times. In other words, a person in the UK will likely post at different times than a person on the US west coast. Your posting times should be consistent with where your cover id purports to live.

Your company can benefit from having fake employees. Leak their names, and see who tries to contact the fake employees. This can allow you to discover who's looking into you.

Enterprises tend to use similar systems: Cisco firewalls, Active Directory, etc. Attackers know how to get into this stuff. But you can still slow attackers down, and make their job harder. Set up some internal-facing honeypots, and see who breaks into them. Install decoy services on underutilized hosts. Put "interesting" files on public shares (for example, a password-encrypted file called "" that's filled with 12GB of /dev/urandom).
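
The 12GB-of-/dev/urandom decoy can be sketched portably with os.urandom; the chunked write keeps memory flat, and the path and (much smaller) size used here are illustrative:

```python
import os

def write_decoy(path, size_bytes, chunk=1 << 20):
    """Fill a decoy file with random bytes, one chunk at a time."""
    remaining = size_bytes
    with open(path, "wb") as f:
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(os.urandom(n))    # random data looks like encrypted data
            remaining -= n
    return os.path.getsize(path)
```

Anyone who exfiltrates the file wastes bandwidth and time "decrypting" noise, and the access itself is a detection signal.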


Insider threats in the software supply chain

Brenton Kohler, Cigital

Fortify was one of the first static code analysis tools, and it was written by Cigital.

Malicious code: insider threats inserted into production code.

There's a distinction between code that is intentionally malicious, and code that is unintentionally malicious. Back doors are the most common form of malicious code.

Think of Bob from Verizon, who was outsourcing his programming tasks to developers in India and China. What kind of malicious code could those developers have inserted?

Salami fraud is an old type of crime. It involves skimming a couple of cents (or fractions of a cent) on transactions. Think of "tweaking" a gas pump so you make a couple of extra cents on each tank of gas. This kind of fraud is easy (or easier) for insiders to accomplish.
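
The salami mechanic can be sketched with Decimal arithmetic; the pump totals below are made up, and the point is that sub-cent crumbs add up across many transactions:

```python
from decimal import Decimal, ROUND_DOWN

def skim(amount):
    """Charge the customer the truncated amount; pocket the sub-cent remainder."""
    charged = amount.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    return charged, amount - charged

# made-up pump totals computed at fractional-cent precision
total_skimmed = Decimal("0")
for price in [Decimal("31.4159"), Decimal("27.1828"), Decimal("16.1803")]:
    charged, crumb = skim(price)
    total_skimmed += crumb
```

Three transactions yield under a cent; millions of transactions yield real money, and each individual receipt looks correct to the customer.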

A vulnerability is a bug that's there by mistake. Malcode is a bug that was created deliberately, by development, IT, or someone else with the ability to create new attack surfaces.

Input to malcode detection: design documents, code, binaries, build files.

Output from malcode detection: a list of suspicious stuff, from static and human analysis.

Context matters. Suppose you find exploitable code. Is it there because the developer goofed, or because the developer was malicious?

It's easy to write code signatures to detect malicious code. Figuring out the intent of that code is harder.
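
A sketch of how naive code signatures work, and why they stop short of intent: the patterns below are hypothetical examples (not from the talk), and a hit could be sloppy code just as easily as malcode.

```python
import re

# naive, hypothetical signatures -- a hit flags code for review, not guilt
SIGNATURES = {
    "hardcoded IP": re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
    "dynamic eval": re.compile(r"\beval\s*\("),
    "shell escape": re.compile(r"\bos\.system\s*\("),
}

def scan(source: str):
    """Return the names of all signatures that match the source text."""
    return [name for name, pat in SIGNATURES.items() if pat.search(source)]
```

Deciding whether a flagged construct is a back door or a debugging leftover still takes a human and the surrounding context.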

Malcode detection is most effective when performed on copies of what's deployed in production. For example, the malcode might be inserted by a release engineer rather than a developer.

Suppose you find malcode in one of your applications. How do you fix it, without tipping off the (malicious) developer who wrote it? You could go to a trusted developer, and ask them to fix it. You could allow the malcode to run, and monitor what it does. You could actively thwart the malcode (for example, if the malcode makes connections to, then set up an ACL to block traffic to).

Pharmaspam at a .edu: DDOS by an unlikely culprit

Patrick Laverty, Akamai

Brown University was DDoSed by Google, due to some SEO chicanery by the online pharmacy folks. Their web site was hacked, such that each search engine crawler request was redirected to a page about online pharmaceuticals. Brown University started showing up in search results for Cialis, Viagra, and other medications.

Webshell: an application that runs in a web server, and allows you to do many things that would normally require shell access. Often implemented as .php scripts. If you can plant a webshell on a target system, you own it.

Basic recipe

  1. Use google to find vulnerable web sites
  2. find an entry point
  3. plant a web shell
  4. Lulz

SQL injection is an easy attack vector. Lax permissions on the web server make this a much bigger problem. Lax permissions on both the web server and the database make this a very big problem.

PHP has a passthru() function. It's very dangerous to enable this on websites. exec() is another function that's dangerous to enable.

Suppose you find an SQL injection point. With suitably lax permission, you can do things like this:

SELECT ... INTO OUTFILE '/var/html/foo.php'

If you can write foo.php to the filesystem, and the web server is willing to run it, then you can do whatever you want.

Ways to prevent this kind of attack:

  • Limit the set of directories that the web server can write to. If the web server doesn't absolutely need write access, then it shouldn't have it.
  • Limit database permissions given to web applications. In particular, do not give the (MySQL) FILE permission to web applications. Also, the user id of database process should not have permission to write to web site directories.
  • Use parameterized queries and prepared statements. This will prevent SQL injection attacks.
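
The parameterized-query point can be demonstrated with Python's sqlite3 module; the table and data are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup(name):
    # the ? placeholder binds `name` as data, never as SQL text
    return conn.execute("SELECT secret FROM users WHERE name = ?",
                        (name,)).fetchall()

safe = lookup("alice")
attack = lookup("alice' OR '1'='1")   # classic injection payload
```

The classic ' OR '1'='1 payload returns nothing, because the placeholder binds it as a literal (nonexistent) name rather than as SQL.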

The future of smarter monitoring and detection

Mark Ellzey, Threat Stack

Linux auditing systems provide you with visibility into kernel internals. This communication typically takes place over some kind of socket connection.

Threat Stack provides an improved version of Red Hat's auditd. They wanted a better version of auditd, with better filtering capabilities, more scalability, fewer bugs, more modularity, and fewer global variables.
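
Audit records are key=value text, so a first pass at analyzing them is simple field extraction; a sketch (the sample line is illustrative, not captured from a real system):

```python
import re

def parse_audit(line):
    """Split an auditd-style record into a dict of key=value fields."""
    pairs = re.findall(r'(\w+)=("[^"]*"|\S+)', line)
    return {key: value.strip('"') for key, value in pairs}

# illustrative sample record, not from a real system
sample = ('type=SYSCALL msg=audit(1391875200.000:101): arch=c000003e '
          'syscall=59 success=yes exe="/usr/bin/ssh"')
fields = parse_audit(sample)
```

Filtering, correlation, and scaling are where the hard work (and the auditd rewrite the talk described) actually lives.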

Reducing Uncertainty by Managing Risk

Christian Ternus, Akamai

Risks are the known unknowns. Uncertainty is the unknown unknowns: the risks we don't know about. You can quantify risk, but you can't really quantify uncertainty.

As a security professional, one of your goals is to turn uncertainty into risk.

A black swan is a highly improbable event. Humans are not very good at dealing with small probabilities. We tend to round them down to "probably won't happen". But they do happen, and they usually have a big impact. As security professionals, we have to figure out how to deal with these highly improbable events. We have to actively explore unknown unknowns, expand our perspectives, and share knowledge.

Question: if you have low probabilities, couldn't you make them more understandable by changing scale?

We can create models, but it's hard to create good models.

Question: Traditional risk models are based on (probability x cost). Would you change these models?

It's hard to assess probability in advance. The probabilities are small, and the error bars tend to be big.
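
The big-error-bars point can be illustrated with a quick Monte Carlo over the (probability x cost) model mentioned in the question; all figures here are made up:

```python
import random
import statistics

def expected_loss(p_low, p_high, cost, trials=100_000, seed=1):
    """Monte Carlo (probability x cost) when the probability itself is uncertain."""
    rng = random.Random(seed)
    losses = [rng.uniform(p_low, p_high) * cost for _ in range(trials)]
    return statistics.mean(losses), statistics.stdev(losses)

# made-up figures: breach probability known only to within two orders of magnitude
mean, spread = expected_loss(1e-5, 1e-3, 10_000_000)
```

The spread comes out to more than half the mean: when the input probability spans two orders of magnitude, the expected-loss estimate is dominated by its error bars.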

Question: What about the principal agent problem?

Different people have different value calculations, and they'll make different decisions. I'm interested in ways that the IT community can frame risk, using ways that non-IT industries frame risk. For example, what can we learn from airlines, the nuclear industry, and so forth.

Other Stuff

See - the Open Organization of Lockpickers.