Are Your Smartphone Apps Leaking Personal Information?
For many years now, we have been trained to look for the SSL icon in our browsers, and we have told our family and friends to do the same. It's a basic security measure that ensures our traffic cannot be intercepted and spied on. The last thing we want when banking, reading email or shopping online is someone watching our every move and storing our email contacts, credit card information, and so on. On desktops and laptops, this problem was solved long ago, and through constant public education we have reached a point where most people know what the little yellow lock icon next to the URL bar means, or at least to look for HTTPS instead of plain HTTP in the web address.
When it comes to mobile devices, however, things are different. In a mobile browser the lock icon is still visible, and most mobile browsers do a good job of making it clear whether our communications are secure. But what about applications? Most of the time, apps display no lock icon or URL for the user to see, yet we use them for all kinds of critical tasks like banking, communication and shopping. On top of that, our smartphones know far more about us, with access to our contact lists, calendars, GPS location and more. How can we tell whether the apps we use encrypt their connections back to their servers? The risk is serious enough for a single user, but for a business, facing threats of corporate espionage and potentially holding clients' personal information, things can escalate quickly.
Right now, most smartphone users have no way to find out whether the apps they use are secure; they have to trust the developer's claim that they are. Unfortunately, that claim does not always hold up. To find out whether a particular app uses SSL connections properly, it helps to know the security models available to developers, and then the tools that can be used to monitor apps and get a real answer. Last October, a team at Leibniz University in Hannover dug deep into the Android core to find out how SSL is implemented and how easy it is for app developers to get it right, and published a report on the subject.
The team analyzed Android apps because Google's platform is the fastest growing smartphone platform in the world; in its end-of-year security bulletin, Kaspersky noted that the vast majority of mobile threats target Android. Some of the researchers' findings were striking. First, there is no way for an ordinary user to know what kind of security a specific app provides. The permissions dialog shown when installing an Android app is of little help, since many apps go overboard with the permissions they request. Nearly every type of app asks to communicate over the network, and that says nothing about what is actually sent, or whether it is sent securely.
By examining app behavior and performing static code analysis, they found 1,074 apps, or 8% of those analyzed, containing SSL code that accepted all hostnames or all certificates when connecting, leaving them potentially vulnerable to man-in-the-middle (MitM) attacks. In a follow-up manual audit, 41% of the apps they tested did in fact prove vulnerable to various MitM attacks. This means tens of millions of Android users are running applications that are exposed because of bad SSL implementations. The team managed to capture credentials for bank accounts, PayPal, credit cards, Twitter, Facebook, Google and more. They even managed to inject virus signatures into an antivirus app and disable it completely.
Some of the biggest mistakes app developers made were configuring the TrustManager interface to trust all certificates, forgetting to verify hostnames when establishing a secure connection, and using mixed modes in which encrypted and unencrypted data appear on the same page. It is also worth noting that Android 4.0 trusts 134 root Certificate Authorities (CAs) out of the box, which is quite a lot.
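To make the first two mistakes concrete, here is a minimal sketch in Python, not the researchers' code, showing the difference between a properly validating TLS configuration and the "trust everything" anti-pattern that plagued the Android apps. Python's `ssl.SSLContext` plays the role of an Android `TrustManager`/`HostnameVerifier` pair here; the names `secure` and `insecure` are just illustrative.

```python
import ssl

# Secure default: create_default_context() verifies both the certificate
# chain (against the system CA store) and the server's hostname -- roughly
# what a correctly written Android TrustManager/HostnameVerifier gives you.
secure = ssl.create_default_context()
print(secure.verify_mode == ssl.CERT_REQUIRED)  # True
print(secure.check_hostname)                    # True

# ANTI-PATTERN: the equivalent of a TrustManager that trusts all
# certificates and never checks the hostname. Any MitM attacker can now
# present a self-signed certificate and the connection will be accepted.
insecure = ssl.create_default_context()
insecure.check_hostname = False        # skip hostname verification
insecure.verify_mode = ssl.CERT_NONE   # accept any certificate at all
print(insecure.verify_mode == ssl.CERT_NONE)    # True
```

Note the ordering: hostname checking must be disabled before dropping certificate verification, which is itself a small hint from the library that the two checks belong together.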
So how can you figure out whether the apps you use are safe? As a smartphone user, you may not care much about how the developer implemented the networking code, only whether the app is safe. The German team did it through static code analysis, using a custom plugin for Androguard, a tool made for analyzing Android applications. By scanning the code itself, they could see how apps interacted with the underlying APIs and how the network code worked, revealing where the errors were and why something might not be secure.
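The core idea behind that kind of static scan can be sketched very simply: search decompiled app source for API usage that commonly signals broken SSL validation. The toy scanner below is purely illustrative, a hypothetical stand-in for what Androguard automates at scale, though the pattern strings are real Android/Apache HttpClient identifiers worth flagging.

```python
# Identifiers that commonly indicate weakened SSL validation in Android code.
RISKY_PATTERNS = [
    "ALLOW_ALL_HOSTNAME_VERIFIER",              # Apache HttpClient: skip hostname check
    "AllowAllHostnameVerifier",                 # same, via the class name
    "SSLCertificateSocketFactory.getInsecure",  # Android: socket with no cert checks
    "checkServerTrusted",                       # custom TrustManager -- review by hand
]

def flag_risky_lines(source: str):
    """Return (line_number, pattern) pairs for every suspicious line."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern in RISKY_PATTERNS:
            if pattern in line:
                hits.append((lineno, pattern))
    return hits

sample = "HostnameVerifier v = new AllowAllHostnameVerifier();"
print(flag_risky_lines(sample))  # [(1, 'AllowAllHostnameVerifier')]
```

A hit is not proof of a vulnerability, a custom `checkServerTrusted` can be perfectly sound, which is why the researchers combined this kind of scan with actual MitM testing.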
If you can't do this kind of deep analysis, there are other ways to get a good idea of whether the apps you use are safe. One place to look is ZAP, the Zscaler Application Profiler, a web-based interface for scanning Android and iOS apps. There, you can search past results, scan apps through Zscaler's proxy, or upload apps to the site. The results tell you whether authentication is secure, whether there is any data leakage, and whether content is exposed through analytics or ad networks.
You could also set up your own proxy and use an app like ProxyDroid, which forces other apps' traffic through a proxy you control, or simply use a VPN to remove the risk that a MitM attack on your local network, say a hotel or other untrusted wi-fi, will extract anything from your apps. By routing all of your phone's traffic through your own network, you can also use a packet analyzer like Wireshark to capture the packets going by and inspect the data yourself: it should look like gibberish, and if you can read it, something is leaking. This will not tell you whether a bug in the code could allow a more active attack, but at least passive eavesdropping on the wire won't expose your data.
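If you want to triage a capture programmatically rather than eyeball it in Wireshark, a rough first pass is to check whether each TCP payload starts like a TLS record or like plaintext. This heuristic is an assumption-laden sketch (it only inspects the first bytes and will miss other encrypted protocols), not a substitute for real protocol analysis:

```python
def looks_like_tls(payload: bytes) -> bool:
    """Rough heuristic: a TLS record starts with a content-type byte
    (0x14-0x17, e.g. 0x16 = Handshake) followed by major version 0x03."""
    return len(payload) >= 3 and 0x14 <= payload[0] <= 0x17 and payload[1] == 0x03

# A TLS handshake record -- encrypted traffic, as it should be:
print(looks_like_tls(bytes([0x16, 0x03, 0x01, 0x02, 0x00])))  # True

# Plaintext HTTP leaking in the clear -- worth investigating:
print(looks_like_tls(b"GET /account HTTP/1.1\r\nHost: bank.example\r\n"))  # False
```

Fed with per-connection payloads exported from a capture, this flags which app connections are worth a closer look in Wireshark.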
Of course, in the end, the burden falls on app developers to implement their applications securely. If you make apps, whether for your business or the wider world, you need to make sure that when you implement SSL, you do it correctly. InfoSec Institute offers a web application security course, the Android documentation includes a comprehensive guide on these topics, and for iOS, the SANS Institute has published a detailed coding tutorial on implementing secure connections and dealing with potential MitM attacks.
Meanwhile, research continues in other directions. Last month, Fujitsu revealed that it is working on an HTML5-based system that would grant workers access to corporate data only when they are in a secure environment. An app on the employee-owned phone would detect whether the worker is in range of an approved wi-fi network, or whether they have tapped a corporate NFC card against the phone. Fujitsu says the system will be released later this year.
About the Author: Patrick Lambert is a security researcher for InfoSec Institute, an IT security training company.