Krebs On Security
Federal prosecutors have charged three men with carrying out a deadly hoax known as “swatting,” in which perpetrators call or message a target’s local 911 operators claiming a fake hostage situation or a bomb threat in progress at the target’s address — with the expectation that local police may respond to the scene with deadly force. While only one of the three men is accused of making the phony call to police that got an innocent man shot and killed, investigators say the other two men’s efforts to taunt and deceive one another ultimately helped point the gun.
According to prosecutors, the tragic hoax started with a dispute over a match in the online game “Call of Duty.” The indictment says Shane M. Gaskill, a 19-year-old Wichita, Kansas resident, and Casey S. Viner, 18, had a falling out over a $1.50 game wager.
Viner allegedly wanted to get back at Gaskill, and so enlisted the help of another man — Tyler R. Barriss — a serial swatter known by the alias “SWAuTistic” who’d bragged of “swatting” hundreds of schools and dozens of private residences.
The federal indictment references transcripts of alleged online chats among the three men. In an exchange on Dec. 28, 2017, Gaskill taunts Barriss on Twitter after noticing that Barriss’s Twitter account (@swattingaccount) had suddenly started following him.
Viner and Barriss both allegedly say if Gaskill isn’t scared of getting swatted, he should give up his home address. But the address that Gaskill gave Viner to pass on to Barriss no longer belonged to him and was occupied by a new tenant.
Barriss allegedly then called the emergency 911 operators in Wichita and said he was at the address provided by Viner, that he’d just shot his father in the head, was holding his mom and sister at gunpoint, and was thinking about burning down the home with everyone inside.
Wichita police quickly responded to the fake hostage report and surrounded the address given by Gaskill. Seconds later, 28-year-old Andrew Finch exited his mom’s home and was killed by a single shot from a Wichita police officer. Finch, a father of two, was no party to the gamers’ dispute and was simply in the wrong place at the wrong time.
Just minutes after the fatal shooting, Barriss — who was in Los Angeles — was allegedly anxious to learn whether his Kansas swatting attempt had succeeded. Someone had just sent Barriss a screenshot of a conversation between Viner and Gaskill mentioning police at Gaskill’s home and someone getting killed. So Barriss allegedly began needling Gaskill via instant message:
Defendant BARRISS: Yo answer me this
Defendant BARRISS: Did police show up to your house yes or no
Defendant GASKILL: No dumb fuck
Defendant BARRISS: Lmao here’s how I know you’re lying
Prosecutors say Barriss then posted a screenshot showing the following conversation between Viner and Gaskill:
Defendant VINER: Oi
Defendant GASKILL: Hi
Defendant VINER: Did anyone show @ your house?
Defendant VINER: Be honest
Defendant GASKILL: Nope
Defendant GASKILL: The cops are at my house because someone ik just killed his dad
Barriss and Gaskill then allegedly continued their conversation:
Defendant GASKILL: They showed up to my old house retard
Defendant BARRISS: That was the call script
Defendant BARRISS: Lol
Defendant GASKILL: Your literally retarded
Defendant GASKILL: Ik dumb ass
Defendant BARRISS: So you just got caught in a lie
Defendant GASKILL: No I played along with you
Defendant GASKILL: They showed up to my old house that we own and rented out
Defendant GASKILL: We don’t live there anymore bahahaha
Defendant GASKILL: ik you just wasted your time and now your pissed
Defendant BARRISS: Not really
Defendant BARRISS: Once you said “killed his dad” I knew it worked lol
Defendant BARRISS: That was the call lol
Defendant GASKILL: Yes it did buy they never showed up to my house
Defendant GASKILL: You guys got trolled
Defendant GASKILL: Look up who live there we moved out almost a year ago
Defendant GASKILL: I give you props though you’re the 1% that can actually swat babahaha
Defendant BARRISS: Dude MY point is You gave an address that you dont live at but you were acting tough lol
Defendant BARRISS: So you’re a bitch
Later on the evening of Dec. 28, after news of the fatal swatting started blanketing the local television coverage in Kansas, Gaskill allegedly told Barriss to delete their previous messages. “Bape” in this conversation refers to a nickname allegedly used by Casey Viner:
Defendant GASKILL: Dm asap
Defendant GASKILL: Please it’s very fucking impi
Defendant GASKILL: Hello
Defendant BARRISS: ?
Defendant BARRISS: What you want
Defendant GASKILL: Dude
Defendant GASKILL: Me you and bape
Defendant GASKILL: Need to delete everything
Defendant GASKILL: This is a murder case now
Defendant GASKILL: Casey deleted everything
Defendant GASKILL: You need 2 as well
Defendant GASKILL: This isn’t a joke K troll anymore
Defendant GASKILL: If you don’t you’re literally retarded I’m trying to help you both out
Defendant GASKILL: They know it was swat call
The indictment also features chat records between Viner and others in which he admits to his role in the deadly swatting attack. In the following chat excerpt, Viner was allegedly talking with someone identified only as “J.D.”
Defendant VINER: I literally said you’re gonna be swatted, and the guy who swatted him can easily say I convinced him or something when I said hey can you swat this guy and then gave him the address and he said yes and then said he’d do it for free because I said he doesn’t think anything will happen
Defendant VINER: How can I not worry when I googled what happens when you’re involved and it said a eu [sic] kid and a US person got 20 years in prison min
Defendant VINER: And he didn’t even give his address he gave a false address apparently
J.D.: You didn’t call the hoax in…
Defendant VINER: Does t [sic] even matter ?????? I was involved I asked him to do it in the first place
Defendant VINER: I gave him the address to do it, but then again so did the other guy he gave him the address to do it as well and said do it pull up etc
Barriss is charged with multiple counts of making false information and hoaxes; cyberstalking; threatening to kill another or damage property by fire; interstate threats; conspiracy; and wire fraud. Viner and Gaskill were both charged with wire fraud, conspiracy and obstruction of justice. A copy of the indictment is available here.
The Associated Press reports that the most serious charge of making a hoax call carries a potential life sentence because it resulted in a death, and that some of the other charges carry sentences of up to 20 years.
As I told the AP, swatting has been a problem for years, but it seems to have intensified around the time that top online gamers started being able to make serious money playing games online and streaming those games live to thousands or even tens of thousands of paying subscribers. Indeed, Barriss himself had earned a reputation as someone who delighted in watching police kick in doors behind celebrity gamers who were live-streaming.
This case is not the first time federal prosecutors have charged multiple people in the same swatting attack even though only one of them actually placed the hoax call to police. In 2013, my home was the target of a swatting attack that thankfully ended without incident. The government ultimately charged four men — several of whom were minors at the time — with conducting that swat attack as well as many others they’d perpetrated against public figures and celebrities.
But despite spending considerable resources investigating those crimes, prosecutors were able to secure only light punishments for those involved in the swatting spree. One of those men, a serial swatter and cyberstalker named Mir Islam, was sentenced to just one year in jail for his role in multiple swattings. Another individual who was part of that group — Eric “Cosmo the God” Taylor — got three years of probation.
Something tells me Barriss, Gaskill and Viner aren’t going to be so lucky. Barriss has admitted his role in many swattings, and he admitted to his last, fatal swatting in an interview he gave to KrebsOnSecurity less than 24 hours after Andrew Finch’s murder — saying he was not the person who pulled the trigger.
Your mobile phone is giving away your approximate location all day long. This isn’t exactly a secret: It has to share this data with your mobile provider constantly to provide better call quality and to route any emergency 911 calls straight to your location. But now, the major mobile providers in the United States — AT&T, Sprint, T-Mobile and Verizon — are selling this location information to third party companies — in real time — without your consent or a court order, and with apparently zero accountability for how this data will be used, stored, shared or protected.
Think about what’s at stake in a world where anyone can track your location at any time and in real time. Right now, the only way to be free of constant tracking is to remove the SIM card from your mobile device and never put it back in unless you want people to know where you are.
It may be tough to put a price on one’s location privacy, but here’s something of which you can be sure: The mobile carriers are selling data about where you are at any time, without your consent, to third-parties for probably far less than you might be willing to pay to secure it.
The problem is that as long as anyone but the phone companies and law enforcement agencies with a valid court order can access this data, it is always going to be at extremely high risk of being hacked, stolen and misused.
Consider just two recent examples. Earlier this month The New York Times reported that a little-known data broker named Securus was selling local police forces around the country the ability to look up the precise location of any cell phone across all of the major U.S. mobile networks. Then it emerged that Securus had been hacked, and that its database of hundreds of law enforcement officer usernames and passwords had been plundered. We also learned that Securus ultimately obtained its data from LocationSmart, a California-based location tracking firm.
On May 17, KrebsOnSecurity broke the news of research by Carnegie Mellon University PhD student Robert Xiao, who discovered that a LocationSmart try-before-you-buy, opt-in demo of the company’s technology was wide open — allowing real-time lookups from anyone on anyone’s mobile device — without any sort of authentication, consent or authorization.
Xiao said it took him all of about 15 minutes to discover that LocationSmart’s lookup tool could be used to track the location of virtually any mobile phone user in the United States.
Securus seems equally clueless about protecting the priceless data entrusted to it by LocationSmart. Over the weekend KrebsOnSecurity discovered that someone — almost certainly a security professional employed by Securus — had been uploading dozens of emails, PDFs, password lists and other files to Virustotal.com — a service owned by Google that can be used to scan any submitted file against dozens of commercial antivirus tools.
Antivirus companies willingly participate in Virustotal because it gives them early access to new, potentially malicious files being spewed by cybercriminals online. Virustotal users can submit suspicious files of all kinds; in return they’ll see whether any of the 60+ antivirus tools think the file is bad or benign.
One basic rule that all Virustotal users need to understand is that any file submitted to Virustotal is also available to customers who purchase access to the service’s file repository. Nevertheless, for the past two years someone at Securus has been submitting a great deal of information about the company’s operations to Virustotal, including copies of internal emails and PDFs about visitation policies at a number of local and state prisons and jails that made up much of Securus’ business.
One of the files, submitted on April 27, 2018, is titled “38k user pass microsemi.com – joomla_production.mic_users_blockedData.txt”. This file includes the names and what appear to be hashed/scrambled passwords of some 38,000 accounts — supposedly taken from Microsemi, a company that’s been called the largest U.S. commercial supplier of military and aerospace semiconductor equipment.
Many of the usernames in that file do map back to names of current and former employees at Microsemi. KrebsOnSecurity shared a copy of the database with Microsemi, but has not yet received a reply. Securus also has not responded to requests for comment.
These files that someone at Securus apparently submitted regularly to Virustotal also provide something of an internal roadmap of Securus’ business dealings, revealing the names and login pages for several police departments and jails across the country, such as the Travis County Jail site’s Web page to access Securus’ data.
Check out the screenshot below. Notice that forgot password link there? Clicking it prompts the visitor to enter their username and select a “security question” to answer. There are but three questions: “What is your pet’s name?” “What is your favorite color?” and “What town were you born in?” There don’t appear to be any limits on the number of times one can attempt to answer a secret question.
Given such robust, state-of-the-art security, how long do you think it would take for someone to figure out how to reset the password for any authorized user at Securus’ Travis County Jail portal?
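To put rough numbers on that question: with no limit on attempts, the weakest of the three questions sets the price of entry. The guess-space sizes in the sketch below are assumptions for illustration only, not data drawn from Securus or any real survey:

```python
# Back-of-envelope cost of beating a "security question" with no rate limit.
# Guess-space sizes are illustrative assumptions, not measured values.
guess_space = {
    "pet's name": 10_000,     # a modest list of common pet names
    "favorite color": 20,     # answers cluster on a handful of colors
    "town of birth": 30_000,  # a list of plausible hometowns
}

# An attacker who can pick the question simply targets the smallest space.
weakest, size = min(guess_space.items(), key=lambda kv: kv[1])
expected_tries = size / 2  # on average, success comes halfway through the list

print(f"weakest link: {weakest} (~{expected_tries:.0f} tries on average)")
# → weakest link: favorite color (~10 tries on average)
```

Even if these assumed numbers are off by an order of magnitude, an unthrottled reset form turns the weakest question into a matter of minutes.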
Yes, companies like Securus and LocationSmart have been careless with securing our prized location data, but why should they care if their paying customers are happy and the real-time data feeds from the mobile industry keep flowing?
No, the real blame for this sorry state of affairs comes down to AT&T, Sprint, T-Mobile and Verizon. T-Mobile was the only one of the four major providers that admitted providing Securus and LocationSmart with the ability to perform real-time location lookups on their customers. The other three carriers declined to confirm or deny that they did business with either company.
As noted in my story last Thursday, LocationSmart included the logos of the four carriers on their home page — in addition to those of several other major firms (that information is no longer available on the company’s site, but it can still be viewed by visiting this historic record of it over at the Internet Archive).
Now, don’t think for a second that these two tiny companies are the only ones with permission from the mobile giants to look up such sensitive information on demand. At a minimum, each of these companies can in theory resell (or leak) this information and access to others. On May 15, ZDNet reported that Securus was getting its data from the carriers by going through an intermediary: 3Cinteractive, which was getting it from LocationSmart.
However, it is interesting that the first insight we got that the mobile firms were being so promiscuous with our private location data came in the Times story about law enforcement officials seeking the ability to access any mobile device’s location data in real time.
All technologies are double-edged swords, which means that each can be used both for good and malicious ends. As much as police officers may wish to avoid the hassle and time constraints of having to get a warrant to determine the precise location of anyone they please whenever they wish, those same law enforcement officers should remember that this technology works both ways: It also can just as easily be abused by criminals to track the real-time movements of police and their families, informants, jurors, witnesses and even judges.
Consider the damage that organized crime syndicates — human traffickers, drug smugglers and money launderers — could inflict armed with an app that displays the precise location of every uniformed officer, whether 300 feet away or across the country. All because they just happened to know the cell phone number tied to each law enforcement official.
Maybe you have children or grandchildren who — like many of their peers these days — carry a mobile device at all times for safety and for quick communication with parents or guardians. Now imagine that anyone in the world has the instant capability to track where your kid is at any time of day. All they’d need is your kid’s digits.
Maybe you’re the current or former target of a stalker, jilted ex-spouse, or vengeful co-worker. Perhaps you perform sensitive work for the government. All of the above-mentioned parties and many more are put at heightened personal risk by having their real-time location data exposed to commercial third parties.
Some people might never sell their location data for any price: I suspect most of us would like this information always to be private unless and until we change the defaults (either in a binary “on/off” way or app-specific). On the other end of the spectrum there are probably plenty of people who don’t care one way or another provided that sharing their location information brings them some real or perceived financial or commercial benefit.
The point is, for many of us location privacy is priceless because, without it, almost everything else we’re doing to safeguard our privacy goes out the window.
And this sad reality will persist until the mobile providers state unequivocally that they will no longer sell or share customer location data without having received and validated some kind of legal obligation — such as a court-ordered subpoena.
But even that won’t be enough, because companies can and do change their policies all the time without warning or recourse (witness the current reality). It won’t be enough until lawmakers in this Congress step up and do their jobs — to prevent the mobile providers from selling our last remaining bastion of privacy in the free world to third party companies who simply can’t or won’t keep it secure.
The next post in this series will examine how we got here, and what Congress and federal regulators have done and might do to rectify the situation.
T-Mobile is investigating a retail store employee who allegedly made unauthorized changes to a subscriber’s account in an elaborate scheme to steal the customer’s three-letter Instagram username. The modifications, which could have let the rogue employee empty bank accounts associated with the targeted T-Mobile subscriber, were made even though the victim customer already had taken steps recommended by the mobile carrier to help minimize the risks of account takeover. Here’s what happened, and some tips on how you can protect yourself from a similar fate.
Earlier this month, KrebsOnSecurity heard from Paul Rosenzweig, a 27-year-old T-Mobile customer from Boston who had his wireless account briefly hijacked. Rosenzweig had previously adopted T-Mobile’s advice to customers about blocking mobile number port-out scams, an increasingly common scheme in which identity thieves armed with a fake ID in the name of a targeted customer show up at a retail store run by a different wireless provider and ask that the number be transferred to the competing mobile company’s network.
So-called “port out” scams allow crooks to intercept your calls and messages while your phone goes dark. Porting a number to a new provider shuts off the phone of the original user and forwards all calls to the new device. Once in control of the mobile number, thieves who have already stolen a target’s password(s) can request any second factor that is sent to the newly activated device, such as a one-time code sent via text message or an automated call that reads the one-time code aloud.
In this case, however, the perpetrator didn’t try to port Rosenzweig’s phone number: Instead, the attacker called multiple T-Mobile retail stores within an hour’s drive of Rosenzweig’s home address until he succeeded in convincing a store employee to conduct what’s known as a “SIM swap.”
A SIM swap is a legitimate process by which a customer can request that a new SIM card (the tiny, removable chip in a mobile device that allows it to connect to the provider’s network) be added to the account. Customers can request a SIM swap when their existing SIM card has been damaged, or when they are switching to a different phone that requires a SIM card of another size.
However, thieves and other ne’er-do-wells can abuse this process by posing as a targeted mobile customer or technician and tricking employees at the mobile provider into swapping in a new SIM card for that customer on a device that they control. If successful, the SIM swap accomplishes more or less the same result as a number port out (at least in the short term) — effectively giving the attackers access to any text messages or phone calls that are sent to the target’s mobile account.
Rosenzweig said the first inkling he had that something wasn’t right with his phone was on the evening of May 2, 2018, when he spotted an automated email from Instagram. The message said the email address tied to the three-letter account he’d had on the social media platform for seven years — instagram.com/par — had been changed. He quickly logged in to his Instagram account, changed his password and then reverted the email on the account back to his original address.
By this time, the SIM swap conducted by the attacker had already been carried out, although Rosenzweig said he didn’t notice his phone displaying zero bars and no connection to T-Mobile at the time because he was at home and happily surfing the Web on his device using his own wireless network.
The following morning, Rosenzweig received another notice — this one from Snapchat — stating that the password for his account there (“p9r”) had been changed. He subsequently reset that password and enabled two-factor authentication on his Snapchat account.
“That was when I realized my phone had no bars,” he recalled. “My phone was dead. I couldn’t even call 611” [the mobile short number that all major wireless providers make available to reach their customer service departments].
It appears that the perpetrator of the SIM swap abused not only internal knowledge of T-Mobile’s systems, but also a lax password reset process at Instagram. The social network allows users to enable notifications on their mobile phone when password resets or other changes are requested on the account.
But this isn’t exactly two-factor authentication because it also lets users reset their passwords via their mobile account by requesting a password reset link to be sent to their mobile device. Thus, if someone is in control of your mobile phone account, they can reset your Instagram password (and probably a bunch of other types of accounts).
Rosenzweig said even though he was able to reset his Instagram password and restore his old email address tied to the account, the damage was already done: All of the images and other content he’d shared on Instagram over the years were still tied to his account, but the attacker had succeeded in stealing his “par” username, leaving him with the slightly less sexy “par54384321” (apparently chosen for him at random by either Instagram or the attacker).
As I wrote in November 2015, short usernames are something of a prestige or status symbol for many youngsters, and some are willing to pay surprising sums of money for them. Known as “OG” (short for “original” and also “original gangster”) in certain circles online, these can be usernames for virtually any service, from email accounts at Webmail providers to social media services like Instagram, Snapchat, Twitter and YouTube.
People who traffic in OG accounts prize them because they can make the account holder appear to have been a savvy, early adopter of the service before it became popular and before all of the short usernames were taken.
Rosenzweig said a friend helped him work with T-Mobile to regain control over his account and deactivate the rogue SIM card. He said he’s grateful the attackers who hijacked his phone for a few hours didn’t try to drain bank accounts that also rely on his mobile device for authentication.
“It definitely could have been a lot worse given the access they had,” he said.
But throughout all of this ordeal, it struck Rosenzweig as odd that he never once received an email from T-Mobile stating that his SIM card had been swapped.
“I’m a software engineer and I thought I had pretty good security habits to begin with,” he said. “I never re-use passwords, and it’s hard to see what I could have done differently here. The flaw here was with T-Mobile mostly, but also with Instagram. It seems like by having the ability to change one’s [Instagram] password by email or by mobile alone negates the second factor and it becomes either/or from the attackers point of view.”
Sources close to the investigation say T-Mobile is investigating a current or former employee as the likely culprit. The mobile company also acknowledged that it does not currently send customers an email to the email address on file when SIM swaps take place. A T-Mobile spokesperson said the company was considering changing the current policy, which sends the customer a text message to alert them about the SIM swap.
“We take our customers privacy and security very seriously and we regret that this happened,” the company said in a written statement. “We notify our customers immediately when SIM changes occur, but currently we do not send those notifications via email. We are actively looking at ways to improve our processes in this area.”
In summary, when a SIM swap happens on a T-Mobile account, T-Mobile will send a text message to the phone equipped with the new SIM card. But obviously that does not help someone who is the target of a SIM swap scam.
As we can see, merely taking T-Mobile’s advice to place a personal identification number (PIN) on your account to block number port-out scams does nothing to make SIM swap scams any harder to pull off.
Rather, T-Mobile says customers need to call in to the company’s customer support line and place a separate “SIM lock” on their account, which can only be removed if the customer shows up at a retail store with ID (or, presumably, anyone with a fake ID who also knows the target’s Social Security Number and date of birth).
I checked with the other carriers to see if they support locking the customer’s current SIM to the account on file. I suspect they do, and will update this piece when/if I hear back from them. In the meantime, it might be best just to phone up your carrier and ask.
Please note that a SIM lock on your mobile account is separate from a SIM PIN that you can set via your mobile phone’s operating system. A SIM PIN is essentially an additional layer of physical security that locks the current SIM to your device, requiring you to input a special PIN when the device is powered on in order to call, text or access your data plan on your phone. This feature can help block thieves from using your phone or accessing your data if you lose your phone, but it won’t stop thieves from physically swapping in their own SIM card.
iPhone users can follow these instructions to set or change a device’s SIM PIN. Android users can see this page. You may need to enter a carrier-specific default PIN before being able to change it. By default, the SIM PIN for all Verizon and AT&T phones is “1111;” for T-Mobile and Sprint it should default to “1234.”
Be advised, however, that if you forget your SIM PIN and enter the wrong PIN too many times, you may end up having to contact your wireless carrier to obtain a special “personal unlocking key” (PUK).
At the very least, if you haven’t already done so please take a moment to place a port block PIN on your account. This story explains exactly how to do that.
Also, consider reviewing twofactorauth.org to see whether you are taking full advantage of any multi-factor authentication offerings so that your various accounts can’t be trivially hijacked if an attacker happens to guess, steal, phish or otherwise know your password.
One-time login codes produced by mobile apps such as Authy, Duo or Google Authenticator are more secure than one-time codes sent via automated phone call or text — mainly because crooks can’t steal these codes if they succeed in porting your mobile number to another service or by executing a SIM swap on your mobile account [full disclosure: Duo is an advertiser on this blog].
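For anyone curious why those app-generated codes resist SIM swapping: apps like Authy, Duo and Google Authenticator typically implement TOTP (RFC 6238), which derives a short code from a shared secret plus the current 30-second time window using HMAC. No phone number is involved at any point. Here is a minimal sketch using only Python’s standard library — an illustration of the algorithm, not the code any particular app actually ships:

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # low nibble of last byte picks the 4-byte slice
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(key, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    now = time.time() if for_time is None else for_time
    return hotp(key, int(now // step), digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at Unix time 59
print(totp(b"12345678901234567890", for_time=59))  # → 287082
```

Because the code depends only on the shared secret and the clock, an attacker who hijacks your phone number via a port-out or SIM swap learns nothing useful.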
Tracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site
LocationSmart, a U.S. based company that acts as an aggregator of real-time data about the precise location of mobile phone devices, has been leaking this information to anyone via a buggy component of its Web site — without the need for any password or other form of authentication or authorization — KrebsOnSecurity has learned. The company took the vulnerable service offline early this afternoon after being contacted by KrebsOnSecurity, which verified that it could be used to reveal the location of any AT&T, Sprint, T-Mobile or Verizon phone in the United States to an accuracy of within a few hundred yards.
On May 10, The New York Times broke the news that a different cell phone location tracking company called Securus Technologies had been selling or giving away location data on customers of virtually any major mobile network provider to a sheriff’s office in Mississippi County, Mo.
On May 15, ZDNet ran a piece saying that Securus was getting its data through an intermediary — Carlsbad, CA-based LocationSmart.
Wednesday afternoon Motherboard published another bombshell: A hacker had broken into the servers of Securus and stolen 2,800 usernames, email addresses, phone numbers and hashed passwords of authorized Securus users. Most of the stolen credentials reportedly belonged to law enforcement officers across the country — stretching from 2011 up to this year.
Several hours before the Motherboard story went live, KrebsOnSecurity heard from Robert Xiao, a security researcher at Carnegie Mellon University who’d read the coverage of Securus and LocationSmart and had been poking around a demo tool that LocationSmart makes available on its Web site for potential customers to try out its mobile location technology.
LocationSmart’s demo is a free service that allows anyone to see the approximate location of their own mobile phone, just by entering their name, email address and phone number into a form on the site. LocationSmart then texts the phone number supplied by the user and requests permission to ping that device’s nearest cellular network tower.
But according to Xiao, a PhD candidate at CMU’s Human-Computer Interaction Institute, this same service failed to perform basic checks to prevent anonymous and unauthorized queries. Translation: Anyone with a modicum of knowledge about how Web sites work could abuse the LocationSmart demo site to figure out how to conduct mobile number location lookups at will, all without ever having to supply a password or other credentials.
“I stumbled upon this almost by accident, and it wasn’t terribly hard to do,” Xiao said. “This is something anyone could discover with minimal effort. And the gist of it is I can track most peoples’ cell phone without their consent.”
Xiao said his tests showed he could reliably query LocationSmart’s service to ping the cell phone tower closest to a subscriber’s mobile device. Xiao said he checked the mobile number of a friend several times over a few minutes while that friend was moving. By pinging the friend’s mobile network multiple times over several minutes, he was then able to plug the coordinates into Google Maps and track the friend’s directional movement.
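The directional tracking Xiao describes is simple arithmetic once successive pings come back: compute the great-circle distance between consecutive coordinate pairs, and dividing by the elapsed time gives speed. A short sketch of the distance step follows; the coordinates are arbitrary examples, not data from Xiao’s experiment:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    earth_radius_miles = 3958.8  # mean Earth radius
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * earth_radius_miles * asin(sqrt(a))

# Two tower pings a few minutes apart; the gap between them implies movement.
ping1 = (38.8895, -77.0353)
ping2 = (38.8977, -77.0365)
print(round(haversine_miles(*ping1, *ping2), 2))  # roughly half a mile
```

String several such pings together and you get exactly what Xiao demonstrated: a real-time breadcrumb trail of where the phone’s owner is headed.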
“This is really creepy stuff,” Xiao said, adding that he’d also successfully tested the vulnerable service against one Telus Mobility mobile customer in Canada who volunteered to be found.
Before LocationSmart’s demo was taken offline today, KrebsOnSecurity pinged five different trusted sources, all of whom gave consent to have Xiao determine the whereabouts of their cell phones. Xiao was able to determine within a few seconds of querying the public LocationSmart service the near-exact location of the mobile phone belonging to all five of my sources.
One of those sources said the longitude and latitude returned by Xiao’s queries came within 100 yards of their then-current location. Another source said the location found by the researcher was 1.5 miles away from his actual location. The remaining three sources said the location returned for their phones was between approximately 1/5 and 1/3 of a mile from their actual locations at the time.
Reached for comment via phone, LocationSmart Founder and CEO Mario Proietti said the company was investigating.
“We don’t give away data,” Proietti said. “We make it available for legitimate and authorized purposes. It’s based on legitimate and authorized use of location data that only takes place on consent. We take privacy seriously and we’ll review all facts and look into them.”
LocationSmart’s home page features the corporate logos of all four of the major wireless providers, as well as companies like Google, Neustar, ThreatMetrix, and U.S. Cellular. The company says its technologies help businesses keep track of remote employees and corporate assets, and that it helps mobile advertisers and marketers serve consumers with “geo-relevant promotions.”
But these assurances may ring hollow to anyone with a cell phone who’s concerned about having their physical location revealed at any time. The component of LocationSmart’s Web site that can be abused to look up mobile location data at will is an insecure “application programming interface” or API — an interactive feature designed to display data in response to specific queries by Web site visitors. Although LocationSmart’s demo page required users to consent to having their phone located by the service, LocationSmart apparently did nothing to prevent or authenticate direct interaction with the API itself.

It’s not clear exactly how long LocationSmart has offered its demo service or for how long the service has been so permissive; this link from archive.org suggests it dates back to at least January 2017.
API authentication weaknesses are not uncommon, but they can lead to the exposure of sensitive data on a great many people in a short period of time. In April 2018, KrebsOnSecurity broke the story of an API at the Web site of fast-casual bakery chain PaneraBread.com that exposed the names, email and physical addresses, birthdays and last four digits of credit cards on file for tens of millions of customers who’d signed up for an account at PaneraBread to order food online.
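In both the LocationSmart and Panera cases the missing control is the same: the server never verifies who is asking, or whether the data subject consented. A minimal sketch of those two checks follows; this is not LocationSmart's actual code, and the keys, tokens and field names are entirely hypothetical.

```python
# Hypothetical sketch of the two checks the demo API apparently skipped:
# an API-credential check and a per-number consent check. In a real system
# these would live in the Web tier and be backed by a database.
VALID_API_KEYS = {"example-partner-key"}       # issued only to vetted customers
CONSENT_TOKENS = {"+15551234567": "tok-abc"}   # recorded after the SMS opt-in

def handle_locate(params):
    """Return (status, body) for a location-lookup request."""
    if params.get("api_key") not in VALID_API_KEYS:
        return 401, "unauthorized: anonymous queries refused"
    phone = params.get("phone", "")
    if params.get("consent_token") != CONSENT_TOKENS.get(phone):
        return 403, "forbidden: no consent on file for this number"
    return 200, {"phone": phone, "lat": 0.0, "lon": 0.0}  # placeholder payload

# The anonymous query that worked against the real demo site fails here:
print(handle_locate({"phone": "+15551234567"})[0])  # 401
```

The point is not the specific mechanism but that every request must be tied to an authenticated caller and a recorded opt-in, so stripping the consent step out of the demo's front end gains an attacker nothing.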
In a May 9 letter sent to the top four wireless carriers and to the U.S. Federal Communications Commission in the wake of revelations about Securus’ alleged practices, Sen. Ron Wyden (D-Ore.) urged all parties to take “proactive steps to prevent the unrestricted disclosure and potential abuse of private customer data.”
“Securus informed my office that it purchases real-time location information on AT&T’s customers — through a third party location aggregator that has a commercial relationship with the major wireless carriers — and routinely shares that information with its government clients,” Wyden wrote. “This practice skirts wireless carrier’s legal obligation to be the sole conduit by which the government may conduct surveillance of Americans’ phone records, and needlessly exposes millions of Americans to potential abuse and unchecked surveillance by the government.”
Securus, which reportedly gets its cell phone location data from LocationSmart, told The New York Times that it requires customers to upload a legal document — such as a warrant or affidavit — and to certify that the activity was authorized. But in his letter, Wyden said “senior officials from Securus have confirmed to my office that it never checks the legitimacy of those uploaded documents to determine whether they are in fact court orders and has dismissed suggestions that it is obligated to do so.”
Securus did not respond to requests for comment.

THE CARRIERS RESPOND
It remains unclear what, if anything, AT&T, Sprint, T-Mobile and Verizon plan to do about any of this. A third-party firm leaking customer location information not only would almost certainly violate each mobile provider’s own stated privacy policies, but the real-time exposure of this data poses serious privacy and security risks for virtually all U.S. mobile customers (and perhaps beyond, although all my willing subjects were inside the United States).
None of the major carriers would confirm or deny a formal business relationship with LocationSmart, despite LocationSmart listing them each by corporate logo on its Web site.
AT&T spokesperson Jim Greer said AT&T does not permit the sharing of location information without customer consent or a demand from law enforcement.
“If we learn that a vendor does not adhere to our policy we will take appropriate action,” Greer said.
A T-Mobile spokesperson said that after receiving Sen. Wyden’s letter, the company quickly shut down any transaction of customer location data to Securus.
“We are continuing to investigate this matter,” a T-Mobile spokesperson wrote via email. T-Mobile has not yet responded to requests specifically about LocationSmart.
Sprint officials shared the following statement:
“We will answer the questions raised in Sen. Wyden’s letter directly through appropriate channels. However, it is important to note that Sprint’s relationship with Securus does not include data sharing, and is limited to supporting efforts to curb unlawful use of contraband cellphones in correctional facilities.”

WHAT NOW?
Stephanie Lacambra, a staff attorney with the nonprofit Electronic Frontier Foundation, said that wireless customers in the United States cannot opt out of location tracking by their own mobile providers. For starters, carriers constantly use this information to provide more reliable service to their customers. Also, by law wireless companies need to be able to ascertain at any time the approximate location of a customer’s phone in order to comply with emergency 911 regulations.
But unless and until Congress and federal regulators make it more clear how and whether customer location information can be shared with third parties, mobile device customers may continue to have their location information potentially exposed by a host of third-party companies, Lacambra said.
“This is precisely why we have lobbied so hard for robust privacy protections for location information,” she said. “It really should be only that law enforcement is required to get a warrant for this stuff, and that’s the rule we’ve been trying to push for.”
Chris Calabrese is vice president of the Center for Democracy & Technology, a policy think tank in Washington, D.C. Calabrese said the current rules about mobile subscriber location information are governed by the Electronic Communications Privacy Act (ECPA), a law passed in 1986 that hasn’t been substantially updated since.
“The law here is really out of date,” Calabrese said. “But I think any processes that involve going to third parties who don’t verify that it’s a lawful or law enforcement request — and that don’t make sure the evidence behind that request is legitimate — are hugely problematic and they’re major privacy violations.”
“I would be very surprised if any mobile carrier doesn’t think location information should be treated sensitively, and I’m sure none of them want this information to be made public,” Calabrese continued. “My guess is the carriers are going to come down hard on this, because it’s sort of their worst nightmare come true. We all know that cell phones are portable tracking devices. There’s a sort of an implicit deal where we’re okay with it because we get lots of benefits from it, but we all also assume this information should be protected. But when it isn’t, that presents a major problem and I think these examples would be a spur for some sort of legislative intervention if they weren’t fixed very quickly.”
For his part, Xiao says we’re likely to see more leaks from location tracking companies like Securus and LocationSmart as long as the mobile carriers are providing third party companies any access to customer location information.
“We’re going to continue to see breaches like this happen until access to this data can be much more tightly controlled,” he said.
Much of the fraud involving counterfeit credit, ATM debit and retail gift cards relies on the ability of thieves to use cheap, widely available hardware to encode stolen data onto any card’s magnetic stripe. But new research suggests retailers and ATM operators could reliably detect counterfeit cards using a simple technology that flags cards which appear to have been altered by such tools.
Researchers at the University of Florida found that account data encoded on legitimate cards is invariably written using quality-controlled, automated facilities that tend to imprint the information in uniform, consistent patterns.
Cloned cards, however, usually are created by hand with inexpensive encoding machines, and as a result feature far more variance or “jitter” in the placement of digital bits on the card’s stripe.
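The researchers' published method is more sophisticated, but the core "jitter" idea can be illustrated simply: measure the gaps between the magnetic transitions read from a stripe, and flag cards whose spacing varies more than a factory writer would ever produce. The readings and the threshold below are invented for illustration and are not the University of Florida team's actual parameters.

```python
# Toy illustration of jitter-based clone detection (not the UF algorithm):
# score a card by how much the spacing between its flux transitions varies.
import statistics

def jitter_score(transition_positions):
    """Coefficient of variation of the gaps between flux transitions."""
    gaps = [b - a for a, b in zip(transition_positions, transition_positions[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

factory_card = [10.0, 20.1, 30.0, 39.9, 50.1, 60.0]   # near-uniform spacing
hand_encoded = [10.0, 21.5, 29.2, 41.0, 48.5, 61.3]   # visibly noisy spacing

THRESHOLD = 0.05  # made-up cutoff; a real deployment would tune this empirically
for name, card in [("factory", factory_card), ("cloned", hand_encoded)]:
    score = jitter_score(card)
    print(name, round(score, 3), "FLAG" if score > THRESHOLD else "ok")
```

A point-of-sale reader already digitizes the stripe waveform, which is why Traynor says the check can be added to existing registers cheaply.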
Gift cards can be extremely profitable and brand-building for retailers, but gift card fraud creates a very negative shopping experience for consumers and a costly conundrum for retailers. The FBI estimates that while gift card fraud makes up a small percentage of overall gift card sales and use, approximately $130 billion worth of gift cards are sold each year.
One of the most common forms of gift card fraud involves thieves tampering with cards inside the retailer’s store — before the cards are purchased by legitimate customers. Using a handheld card reader, crooks will swipe the stripe to record the card’s serial number and other data needed to duplicate the card.
If there is a PIN on the gift card packaging, the thieves record that as well. In many cases, the PIN is obscured by a scratch-off decal, but gift card thieves can easily scratch those off and then replace the material with identical or similar decals that are sold very cheaply by the roll online.
“They can buy big rolls of that online for almost nothing,” said Patrick Traynor, an associate professor of computer science at the University of Florida. “Retailers we’ve worked with have told us they’ve gone to their gift card racks and found tons of this scratch-off stuff on the ground near the racks.”
At this point the cards are still worthless because they haven’t yet been activated. But armed with the card’s serial number and PIN, thieves can simply monitor the gift card account at the retailer’s online portal and wait until the cards are paid for and activated at the checkout register by an unwitting shopper.
Once a card is activated, thieves can encode that card’s data onto any card with a magnetic stripe and use that counterfeit to purchase merchandise at the retailer. The stolen goods typically are then sold online or on the street. Meanwhile, the person who bought the card (or the person who received it as a gift) finds the card is drained of funds when they eventually get around to using it at a retail store.
Traynor and a team of five other University of Florida researchers partnered with retail giant WalMart to test their technology, which Traynor said can be easily and quite cheaply incorporated into point-of-sale systems at retail store cash registers. They said the WalMart trial demonstrated that the researchers’ technology distinguished legitimate gift cards from clones with up to 99.3 percent accuracy.
While impressive, that rate means the technology could still generate a “false positive” — erroneously flagging a legitimate customer as using a fraudulently obtained gift card — in about one in every XXX times. But Traynor said the retailers they spoke with in testing their equipment all indicated they would welcome any additional tools to curb the incidence of gift card fraud.
“We’ve talked with quite a few retail loss prevention folks,” he said. “Most said even if they can simply flag the transaction and make a note of the person [presenting the cloned card] that this would be a win for them. Often, putting someone on notice that loss prevention is watching is enough to make them stop — at least at that store. From our discussions with a few big-box retailers, this kind of fraud is probably their newest big concern, although they don’t talk much about it publicly. If the attacker does any better than simply cloning the card to a blank white card, they’re pretty much powerless to stop the attack, and that’s a pretty consistent story behind closed doors.”

BEYOND GIFT CARDS
Traynor said the University of Florida team’s method works even more accurately in detecting counterfeit ATM and credit cards, thanks to the dramatic difference in jitter between bank-issued cards and those cloned by thieves.
The magnetic material on most gift cards bears a quality that’s known in the industry as “low coercivity.” The stripe on so-called “LoCo” cards is usually brown in color, and new data can be imprinted on them quite cheaply using a machine that emits a relatively low or weak magnetic field. Hotel room keys also rely on LoCo stripes, which is why they tend to so easily lose their charge (particularly when placed next to something else with a magnetic charge).
In contrast, “high coercivity” (HiCo) stripes like those found on bank-issued debit and credit cards are usually black in color, hold their charge much longer, and are far more durable than LoCo cards. The downside of HiCo cards is that they are more expensive to produce, often relying on complex machinery and sophisticated manufacturing processes that encode the account data in highly uniform patterns.
Traynor said tests indicate their technology can detect cloned bank cards with virtually zero false-positives. In fact, when the University of Florida team first began seeing positive results from their method, they originally pitched the technique as a way for banks to cut losses from ATM skimming and other forms of credit and debit card fraud.
Yet, Traynor said fellow academicians who reviewed their draft paper told them that banks probably wouldn’t invest in the technology because most financial institutions are counting on newer, more sophisticated chip-based (EMV) cards to eventually reduce counterfeit fraud losses.
“The original pitch on the paper was actually focused on credit cards, but academic reviewers were having trouble getting past EMV — as in, ‘EMV solves this and it’s universally deployed, so why is this necessary?’” Traynor said. “We just kept getting reviews back from other academics saying that credit and bank card fraud is a solved problem.”
The trouble is that virtually all chip cards still store account data in plain text on the magnetic stripe on the back of the card — mainly so that the cards can be used in ATM and retail locations that are not yet equipped to read chip-based cards. As a result, even European countries whose ATMs all require chip-based cards remain heavily targeted by skimming gangs because the data on the chip card’s magnetic stripe can still be copied by a skimmer and used by thieves in the United States.
The University of Florida researchers recently were featured in an Associated Press story about an anti-skimming technology they developed and dubbed the “Skim Reaper.” The device, which can be made cheaply using a 3D printer, fits into the mouth of an ATM’s card acceptance slot and can detect the presence of extra card reading devices that skimmer thieves may have fitted on top of or inside the cash machine.
The AP story quoted a New York Police Department financial crimes detective saying the Skim Reapers worked remarkably well in detecting the presence of ATM skimmers. But Traynor said many ATM operators and owners are simply uninterested in paying to upgrade their machines with the technology — in large part because the losses from ATM card counterfeiting are mostly assumed by consumers and financial institutions.
“We found this when we were talking around with the cops in New York City, that the incentive of an ATM bodega owner to upgrade an ATM is very low,” Traynor said. “Why should they go to that extent? Upgrades required to make these machines [chip-card compliant] are significant in cost, and the motivation is not necessarily there.”
Retailers also could choose to produce gift cards with embedded EMV chips that make the cards more expensive and difficult to counterfeit. But doing so likely would increase the cost of manufacturing by $2 to $3 per card, Traynor said.
“Putting a chip on the card dramatically increases the cost, so a $10 gift card might then have a $3 price added,” he said. “And you can imagine the reaction a customer might have when asked to pay $13 for a gift card that has a $10 face value.”
A copy of the University of Florida’s research paper is available here (PDF).
The FBI has compiled a list of recommendations for reducing the likelihood of being victimized by gift card fraud. For starters, when buying in-store don’t just pick cards right off the rack. Look for ones that are sealed in packaging or stored securely behind the checkout counter. Also check the scratch-off area on the back to look for any evidence of tampering.
Here are some other tips from the FBI:
-If possible, only buy cards online directly from the store or restaurant.
-If buying from a secondary gift card market website, check reviews and only buy from or sell to reputable dealers.
-Check the gift card balance before and after purchasing the card to verify the correct balance on the card.
-The re-seller of a gift card is responsible for ensuring the correct balance is on the gift card, not the merchant whose name is listed. If you are scammed, some merchants in some situations will replace the funds. Ask for, but don’t expect, help.
-When selling a gift card through an online marketplace, do not provide the buyer with the card’s PIN until the transaction is complete.
-When purchasing gift cards online, be leery of auction sites selling gift cards at a steep discount or in bulk.
I spent a few days last week speaking at and attending a conference on responding to identity theft. The forum was held in Florida, one of the major epicenters for identity fraud complaints in United States. One gripe I heard from several presenters was that identity thieves increasingly are finding ways to open new mobile phone accounts in the names of people who have already frozen their credit files with the big-three credit bureaus. Here’s a look at what may be going on, and how you can protect yourself.
Carrie Kerskie is director of the Identity Fraud Institute at Hodges University in Naples. A big part of her job is helping local residents respond to identity theft and fraud complaints. Kerskie said she’s had multiple victims in her area recently complain of having cell phone accounts opened in their names even though they had already frozen their credit files at the big three credit bureaus — Equifax, Experian and Trans Union (as well as distant fourth bureau Innovis).
The freeze process is designed so that a creditor should not be able to see your credit file unless you unfreeze the account. A credit freeze blocks potential creditors from being able to view or “pull” your credit file, making it far more difficult for identity thieves to apply for new lines of credit in your name.
But Kerskie’s investigation revealed that the mobile phone merchants weren’t asking any of the four credit bureaus mentioned above. Rather, the mobile providers were making credit queries with the National Consumer Telecommunications and Utilities Exchange (NCTUE), or nctue.com.
“We’re finding that a lot of phone carriers — even some of the larger ones — are relying on NCTUE for credit checks,” Kerskie said. “It’s mainly phone carriers, but utilities, power, water, cable, any of those, they’re all starting to use this more.”
The NCTUE is a consumer reporting agency founded by AT&T in 1997 that maintains data such as payment and account history, reported by telecommunication, pay TV and utility service providers that are members of NCTUE.
Who are the NCTUE’s members? If you call the 800-number that NCTUE makes available to get a free copy of your NCTUE credit report, the option for “more information” about the organization says there are four “exchanges” that feed into the NCTUE’s system: the NCTUE itself; something called “Centralized Credit Check Systems”; the New York Data Exchange; and the California Utility Exchange.
According to a partner solutions page at Verizon, the New York Data Exchange is a not-for-profit entity created in 1996 that provides participating exchange carriers with access to local telecommunications service arrears (accounts that are unpaid) and final account information on residential end user accounts.
The NYDE is operated by Equifax Credit Information Services Inc. (yes, that Equifax). Verizon is one of many telecom providers that use the NYDE (and recall that AT&T was the founder of NCTUE).
The California Utility Exchange collects customer payment data from dozens of local utilities in the state, and also is operated by Equifax (Equifax Information Services LLC).
Google has virtually no useful information available about an entity called Centralized Credit Check Systems. It’s possible it no longer exists. If anyone finds differently, please leave a note in the comments section.
When I did some more digging on the NCTUE, I discovered…wait for it…Equifax also is the sole contractor that manages the NCTUE database. The entity’s site is also hosted out of Equifax’s servers. Equifax’s current contract to provide this service expires in 2020, according to a press release posted in 2015 by Equifax.

RED LIGHT. GREEN LIGHT. RED LIGHT.
Fortunately, the NCTUE makes it fairly easy to obtain any records they may have on Americans. Simply phone them up (1-866-349-5185) and provide your Social Security number and the numeric portion of your registered street address.
Assuming the automated system can verify you with that information, the system then orders an NCTUE credit report to be sent to the address on file. You can also request to be sent a free “risk score” assigned by the NCTUE for each credit file it maintains.
The NCTUE also offers an online process for freezing one’s report. Perhaps unsurprisingly, however, the process for ordering a freeze through the NCTUE appears to be completely borked at the moment, thanks no doubt to Equifax’s well documented abysmal security practices.
Alternatively, it could all be part of a willful or negligent strategy to continue discouraging Americans from freezing their credit files (experts say the bureaus make about $1 for each time they sell your file to a potential creditor).
On April 29, I had an occasion to visit Equifax’s credit freeze application page, and found that the site was being served with an expired SSL certificate from Symantec (i.e., the site would not let me browse using https://). This happened because I went to the site using Google Chrome, and Google announced a decision in September 2017 to no longer trust SSL certs issued by Symantec prior to June 1, 2016.
Google said it would do this starting with Google Chrome version 66. It did not keep this plan a secret. On April 18, Google pushed out Chrome 66. Despite all of the advance warnings, the security people at Equifax apparently missed the memo and in so doing probably scared most people away from its freeze page for several weeks (Equifax fixed the problem on its site sometime after I tweeted about the expired certificate on April 29).
That’s because when one uses Chrome to visit a site whose encryption certificate is validated by one of these unsupported Symantec certs, Chrome puts up a dire security warning that would almost certainly discourage most casual users from continuing.
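Site operators can catch this class of problem before Chrome surfaces it to their visitors, simply by checking a certificate's issuer and expiry from a script. Below is a minimal sketch using only Python's standard library; the hostname is whatever domain you care about, and a failed handshake on an expired or distrusted cert is itself a useful signal.

```python
# Sketch of a certificate health check in stdlib Python. days_until() parses
# the OpenSSL-style "notAfter" stamp that ssl.getpeercert() returns;
# check_site() does a live handshake and will raise ssl.SSLError on an
# expired or untrusted certificate.
import socket
import ssl
from datetime import datetime, timezone

def days_until(not_after):
    """Days until an OpenSSL-style 'notAfter' stamp; negative means expired."""
    exp = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (exp.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

def check_site(host, port=443):
    """Return (issuer organization, days of validity left) for a live site."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    issuer = dict(pair[0] for pair in cert["issuer"]).get("organizationName", "?")
    return issuer, days_until(cert["notAfter"])

# Any cert issued with a pre-June 2016 expiry is, of course, long past due:
print(days_until("Jun  1 00:00:00 2016 GMT"))  # negative: expired years ago
```

Running a check like this on a schedule, and alerting when `days_until` drops below a few weeks or the issuer is on a browser distrust list, would have flagged both the Equifax and NCTUE freeze pages well before visitors hit Chrome's warning screen.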
On May 7, when I visited the NCTUE’s page for freezing my credit file with them I was presented with the very same connection SSL security alert from Chrome, warning of an invalid Symantec certificate and that any data I shared with the NCTUE’s freeze page would not be encrypted in transit.
When I clicked through past the warnings and proceeded to the insecure NCTUE freeze form (which is worded and stylized almost exactly like Equifax’s credit freeze page), I filled out the required information to freeze my NCTUE file. See if you can guess what happened next.
Yep, I was unceremoniously declined the opportunity to do that. “We are currently unable to service your request,” read the resulting Web page, without suggesting alternative means of obtaining its report. “Please try again later.”
This scenario will no doubt be familiar to many readers who tried (and failed in a similar fashion) to file freezes on their credit files with Equifax after the company divulged that hackers had relieved it of Social Security numbers, addresses, dates of birth and other sensitive data on nearly 150 million Americans last September. I attempted to file a freeze via the NCTUE’s site with no fewer than three different browsers, and each time the form reset itself upon submission or took me to a failure page.
So let’s review. Many people who have succeeded in freezing their credit files with Equifax have nonetheless had their identities stolen and new accounts opened in their names thanks to a lesser-known credit bureau that seems to rely entirely on credit checking entities operated by Equifax.
“This just reinforces the fact that we are no longer in control of our information,” said Kerskie, who is also a founding member of Griffon Force, a Florida-based identity theft restoration firm.
I find it difficult to disagree with Kerskie’s statement. What chaps me about this discovery is that countless Americans are in many cases plunking down $3-$10 per bureau to freeze their credit files, and yet a huge player in this market is able to continue to profit from identity theft committed against those same Americans.

EQUIFAX RESPONDS
I asked Equifax why the very same credit bureau operating the NCTUE’s data exchange (and those of at least two other contributing members) couldn’t detect when consumers had placed credit freezes with Equifax. Put simply, Equifax’s wall of legal verbiage below says mainly that NCTUE is a separate entity from Equifax, and that NCTUE doesn’t include Equifax credit information.
Here is Equifax’s full statement on the matter:
· The National Consumer Telecom and Utilities Exchange, Inc. (NCTUE) is a nationwide, member-owned and operated, FCRA-compliant consumer reporting agency that houses both positive and negative consumer payment data reported by its members, such as new connect requests, payment history, and historical account status and/or fraudulent accounts. NCTUE members are providers of telecommunications and pay/satellite television services to consumers, as well as utilities providing gas, electrical and water services to consumers.
· This information is available to NCTUE members and, on a limited basis, to certain other customers of NCTUE’s contracted exchange operator, Equifax Information Services, LLC (Equifax) – typically financial institutions and insurance providers. NCTUE does not include Equifax credit information, and Equifax is not a member of NCTUE, nor does Equifax own any aspect of NCTUE. NCTUE does not provide telecommunications pay/ satellite television or utility services to consumers, and consumers do not apply for those services with NCTUE.
· As a consumer reporting agency, NCTUE places and lifts security freezes on consumer files in accordance with the state law applicable to the consumer. NCTUE also maintains a voluntary security freeze program for consumers who live in states which currently do not have a security freeze law.
· NCTUE is a separate consumer reporting agency from Equifax and therefore a consumer would need to independently place and lift a freeze with NCTUE.
· While state laws vary in the manner in which consumers can place or lift a security freeze (temporarily or permanently), if a consumer has a security freeze on his or her NCTUE file and has not temporarily lifted the freeze, a creditor or other service provider, such as a mobile phone provider, generally cannot access that consumer’s NCTUE report in connection with a new account opening. However, the creditor or provider may be able to access that consumer’s credit report from another consumer reporting agency in order to open a new account, or decide to open the account without accessing a credit report from any consumer reporting agency, such as NCTUE or Equifax.

PLACING THE FREEZE
I was able to successfully place a freeze on my NCTUE report by calling their 800-number — 1-866-349-5355. The message said the NCTUE might charge a fee for placing or lifting the freeze, in accordance with state freeze laws.
Depending on your state of residence, the cost of placing a freeze on your credit file at Equifax, Experian or Trans Union can run between $3 and $10 per credit bureau, and in many states the bureaus also can charge fees for temporarily “thawing” and removing a freeze (according to a list published by Consumers Union, residents of four states — Indiana, Maine, North Carolina, South Carolina — do not need to pay to place, thaw or lift a freeze).
While my home state of Virginia allows the bureaus to charge $10 to place a freeze, for whatever reason the NCTUE did not assess that fee when I placed my freeze request with them. When and if your freeze request does get approved using the NCTUE’s automated phone system, make sure you have pen and paper or a keyboard handy to jot down the freeze PIN, which you will need in the event you ever wish to lift the freeze. When the system read my freeze PIN, it was read so quickly that I had to hit “*” on the dial pad several times to repeat the message.
It’s frankly absurd that consumers should ever have to pay to freeze their credit files at all, and yet a recent study indicates that almost 20 percent of Americans chose to do so at one or more of the three major credit bureaus since Equifax announced its breach last fall. The total estimated cost to consumers in freeze fees? $1.4 billion.
A bill in the U.S. Senate that looks likely to pass this year would require credit-reporting firms to let consumers place a freeze without paying. The free freeze component of the bill is just a tiny provision in a much larger banking reform bill — S. 2155 — that consumer groups say will roll back some of the consumer and market protections put in place after the Great Recession of the last decade.
“It’s part of a big banking bill that has provisions we hate,” said Chi Chi Wu, a staff attorney with the National Consumer Law Center. “It has some provisions not having to do with credit reporting, such as rolling back homeowners disclosure act provisions, changing protections in [current law] having to do with systemic risk.”
Sen. Jack Reed (D-RI) has offered a bill (S. 2362) that would invert the current credit reporting system by making all consumer credit files frozen by default, forcing consumers to unfreeze their files whenever they wish to obtain new credit. Meanwhile, several other bills would impose slightly less dramatic changes to the consumer credit reporting industry.
Wu said that while S. 2155 appears steaming toward passage, she doubts any of the other freeze-related bills will go anywhere.
“None of these bills that do something really strong are moving very far,” she said.
I should note that NCTUE does offer freeze alternatives. Just like with the big four, NCTUE lets consumers place a somewhat less restrictive “fraud alert” on their file indicating that verbal permission should be obtained over the phone from a consumer before a new account can be opened in their name.
Here is a primer on freezing your credit file with the big three bureaus as well as with Innovis. This tutorial also includes advice on placing a security alert at ChexSystems, which is used by thousands of banks to verify customers who are requesting new checking and savings accounts. In addition, consumers can opt out of pre-approved credit offers by calling 1-888-5-OPT-OUT (1-888-567-8688), or visit optoutprescreen.com.
Oh, and if you don’t want Equifax sharing your salary history over the life of your entire career, you might want to opt out of that program as well.
Equifax and its ilk may one day finally be exposed for the digital dinosaurs that they are. But until that day, if you care about your identity you now may have another freeze to worry about. And if you decide to take the step of freezing your file at the NCTUE, please sound off about your experience in the comments below.
Microsoft today released a bundle of security updates to fix at least 67 holes in its various Windows operating systems and related software, including one dangerous flaw that Microsoft warns is actively being exploited. Meanwhile, as it usually does on Microsoft’s Patch Tuesday — the second Tuesday of each month — Adobe has a new Flash Player update that addresses a single but critical security weakness.
First, the Flash Tuesday update, which brings Flash Player to the latest version. Some (present company included) would argue that Flash Player is itself “a single but critical security weakness.” Nevertheless, Google Chrome and Internet Explorer/Edge ship with their own versions of Flash, which get updated automatically when new versions of these browsers are made available.
You can check if your browser has Flash installed/enabled and what version it’s at by pointing your browser at this link. Adobe is phasing out Flash entirely by 2020, but most of the major browsers already take steps to hobble Flash. And with good reason: It’s a major security liability.
Google Chrome blocks Flash from running on all but a handful of popular sites, and then only after user approval. Disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist/blacklist specific sites. If you spot an upward pointing arrow to the right of the address bar in Chrome, that means there’s an update to the browser available, and it’s time to restart Chrome.
For Windows users with Mozilla Firefox installed, the browser prompts users to enable Flash on a per-site basis.
Through the end of 2017 and into 2018, Microsoft Edge will continue to ask users for permission to run Flash on most sites the first time the site is visited, and will remember the user’s preference on subsequent visits. Microsoft users will need to install this month’s batch of patches to get the latest Flash version for IE/Edge, where most of the critical updates in this month’s patch batch reside.
According to security vendor Qualys, one Microsoft patch in particular deserves priority over others in organizations that are testing updates before deploying them: CVE-2018-8174 involves a problem with the way the Windows scripting engine handles certain objects, and Microsoft says this bug is already being exploited in active attacks.
As always, please feel free to leave a comment below if you experience any issues applying any of these updates.
A monster distributed denial-of-service attack (DDoS) against KrebsOnSecurity.com in 2016 knocked this site offline for nearly four days. The attack was executed through a network of hacked “Internet of Things” (IoT) devices such as Internet routers, security cameras and digital video recorders. A new study that tries to measure the direct cost of that one attack for IoT device users whose machines were swept up in the assault found that it may have cost device owners a total of $323,973.75 in excess power and added bandwidth consumption.
But really, none of it was my fault at all. It was mostly the fault of IoT makers for shipping cheap, poorly designed products (insecure by default), and the fault of customers who bought these IoT things and plugged them into the Internet without changing the things’ factory settings (the default passwords, at least).
The botnet that hit my site in Sept. 2016 was powered by the first version of Mirai, a malware strain that wriggles into dozens of IoT devices left exposed to the Internet and running with factory-default settings and passwords. Systems infected with Mirai are forced to scan the Internet for other vulnerable IoT devices, but they’re just as often used to help launch punishing DDoS attacks.
By the time of the first Mirai attack on this site, the young masterminds behind Mirai had already enslaved more than 600,000 IoT devices for their DDoS armies. But according to an interview with one of the admitted and convicted co-authors of Mirai, the part of their botnet that pounded my site was a mere slice of firepower they’d sold for a few hundred bucks to a willing buyer. The attack army sold to this ne’er-do-well harnessed the power of just 24,000 Mirai-infected systems (mostly security cameras and DVRs, but some routers, too).
These 24,000 Mirai devices clobbered my site for several days with data blasts of up to 620 Gbps. The attack was so bad that my pro-bono DDoS protection provider at the time — Akamai — had to let me go because the data firehose pointed at my site was starting to cause real pain for their paying customers. Akamai later estimated that the cost of maintaining protection for my site in the face of that onslaught would have run into the millions of dollars.
We’re getting better at figuring out the financial costs of DDoS attacks to the victims (five-, six- or seven-digit dollar losses) and to the perpetrators (zero to hundreds of dollars). According to a report released this year by DDoS mitigation giant NETSCOUT Arbor, fifty-six percent of organizations last year experienced a financial impact of between $10,000 and $100,000 from DDoS attacks, almost double the proportion from 2016.
But what if there were also a way to work out the cost of these attacks to the users of the IoT devices which get snared by DDoS botnets like Mirai? That’s what researchers at the University of California, Berkeley School of Information sought to determine in their new paper, “rIoT: Quantifying Consumer Costs of Insecure Internet of Things Devices.”
If we accept the UC Berkeley team’s assumptions about costs borne by hacked IoT device users (more on that in a bit), the total cost of added bandwidth and energy consumption from the botnet that hit my site came to $323,973.75. This may sound like a lot of money, but remember that broken down among 24,000 attacking drones the per-device cost comes to just $13.50.
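The per-device figure follows directly from the study’s totals. A quick sanity check of the arithmetic:

```python
# Split the study's estimated total consumer-side cost of the attack
# across the 24,000 Mirai-infected devices that carried it out.
total_cost = 323_973.75   # UC Berkeley study's estimate, in USD
devices = 24_000          # Mirai-infected drones used in the attack

per_device = total_cost / devices
print(f"${per_device:.2f} per device")  # → $13.50 per device
```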
So let’s review: The attacker who wanted to clobber my site paid a few hundred dollars to rent a tiny portion of a much bigger Mirai crime machine. That attack would likely have cost millions of dollars to mitigate. The consumers in possession of the IoT devices that did the attacking probably realized a few dollars in losses each, if that. Perhaps forever unmeasured are the many Web sites and Internet users whose connection speeds are often collateral damage in DDoS attacks.
Anyone noticing a slight asymmetry here in either costs or incentives? IoT security is what’s known as an “externality,” a term used to describe “positive or negative consequences to third parties that result from an economic transaction. When one party does not bear the full costs of its actions, it has inadequate incentives to avoid actions that incur those costs.”
In many cases negative externalities are synonymous with problems that the free market has a hard time rewarding individuals or companies for fixing or ameliorating, much like environmental pollution. The common theme with externalities is that the pain points to fix the problem are so diffuse and the costs borne by the problem so distributed across international borders that doing something meaningful about it often takes a global effort with many stakeholders — who can hopefully settle upon concrete steps for action and metrics to measure success.
The paper’s authors explain the misaligned incentives on two sides of the IoT security problem:
-“On the manufacturer side, many devices run lightweight Linux-based operating systems that open doors for hackers. Some consumer IoT devices implement minimal security. For example, device manufacturers may use default username and password credentials to access the device. Such design decisions simplify device setup and troubleshooting, but they also leave the device open to exploitation by hackers with access to the publicly-available or guessable credentials.”
-“Consumers who expect IoT devices to act like user-friendly ‘plug-and-play’ conveniences may have sufficient intuition to use the device but insufficient technical knowledge to protect or update it. Externalities may arise out of information asymmetries caused by hidden information or misaligned incentives. Hidden information occurs when consumers cannot discern product characteristics and, thus, are unable to purchase products that reflect their preferences. When consumers are unable to observe the security qualities of software, they instead purchase products based solely on price, and the overall quality of software in the market suffers.”
The UC Berkeley researchers concede that their experiments — in which they measured the power output and bandwidth consumption of various IoT devices they’d infected with a sandboxed version of Mirai — suggested that the scanning and DDoS activity prompted by a Mirai infection added an almost negligible amount of power consumption for the infected devices.
Thus, most of the loss figures cited for the 2016 attack rely heavily on estimates of how much the excess bandwidth created by a Mirai infection might cost users directly, and as such I suspect the $13.50 per-machine estimate is on the high side.
No doubt, some Internet users get online via an Internet service provider that imposes a daily “bandwidth cap,” such that over-use of the allotted daily bandwidth can incur overage fees and/or relegate the customer to a slower, throttled connection for some period after the overage.
But for a majority of high-speed Internet users, the added bandwidth use from a router or other IoT device on the network being infected with Mirai probably wouldn’t show up as an added line charge on their monthly bills. I asked the researchers about the considerable wiggle factor here:
“Regarding bandwidth consumption, the cost may not ever show up on a consumer’s bill, especially if the consumer has no bandwidth cap,” reads an email from the UC Berkeley researchers who wrote the report, including Kim Fong, Kurt Hepler, Rohit Raghavan and Peter Rowland.
“We debated a lot on how to best determine and present bandwidth costs, as it does vary widely among users and ISPs,” they continued. “Costs are more defined in cases where bots cause users to exceed their monthly cap. But even if a consumer doesn’t directly pay a few extra dollars at the end of the month, the infected device is consuming actual bandwidth that must be supplied/serviced by the ISP. And it’s not unreasonable to assume that ISPs will eventually pass their increased costs onto consumers as higher monthly fees, etc. It’s difficult to quantify the consumer-side costs of unauthorized use — which is likely why there’s not much existing work — and our stats are definitely an estimate, but we feel it’s helpful in starting the discussion on how to quantify these costs.”
Measuring bandwidth and energy consumption may turn out to be a useful and accepted tool to help more accurately measure the full costs of DDoS attacks. I’d love to see these tests run against a broader range of IoT devices in a much larger simulated environment.
If the Berkeley method is refined enough to become accepted as one of many ways to measure actual losses from a DDoS attack, the reporting of such figures could make these crimes more likely to be prosecuted.
Many DDoS attack investigations go nowhere because targets of these attacks fail to come forward or press charges, making it difficult for prosecutors to prove any real economic harm was done. Since many of these investigations die on the vine because the financial damages fail to reach the thresholds that typically justify a federal prosecution (often $50,000 – $100,000), factoring in estimates of the cost to hacked machine owners involved in each attack could change that math.
But the biggest levers for throttling the DDoS problem are in the hands of the people running the world’s largest ISPs, hosting providers and bandwidth peering points on the Internet today. Some of those levers I detailed in the “Shaming the Spoofers” section of The Democratization of Censorship, the first post I wrote after the attack and after Google had brought this site back online under its Project Shield program.
By the way, we should probably stop referring to IoT devices as “smart” when they start misbehaving within three minutes of being plugged into an Internet connection. That’s about how long your average cheapo, factory-default security camera plugged into the Internet has before getting successfully taken over by Mirai. In short, dumb IoT devices are those that don’t make it easy for owners to use them safely without being a nuisance or harm to themselves or others.
Maybe what we need to fight this onslaught of dumb devices are more network operators turning to ideas like IDIoT, a network policy enforcement architecture for consumer IoT devices that was first proposed in December 2017. The goal of IDIoT is to restrict the network capabilities of IoT devices to only what is essential for regular device operation. For example, it might be okay for network cameras to upload a video file somewhere, but it’s definitely not okay for that camera to then go scanning the Web for other cameras to infect and enlist in DDoS attacks.
So what does all this mean to you? That depends on how many IoT things you and your family and friends are plugging into the Internet and your/their level of knowledge about how to secure and maintain these devices. Here’s a primer on minimizing the chances that your orbit of IoT things become a security liability for you or for the Internet at large.
Twitter just asked all 300+ million users to reset their passwords, citing the exposure of user passwords via a bug that stored passwords in plain text — without protecting them with any sort of encryption technology that would mask a Twitter user’s true password. The social media giant says it has fixed the bug and that so far its investigation hasn’t turned up any signs of a breach or that anyone misused the information. But if you have a Twitter account, please change your account password now.
Or if you don’t trust links in blogs like this (I get it) go to Twitter.com and change it from there. And then come back and read the rest of this. We’ll wait.
In a post to its company blog this afternoon, Twitter CTO Parag Agrawal wrote:
“When you set a password for your Twitter account, we use technology that masks it so no one at the company can see it. We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone.
“Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password. You can change your Twitter password anytime by going to the password settings page.”
Agrawal explains that Twitter normally masks user passwords using a password-hashing function called “bcrypt,” which replaces the user’s password with a random set of numbers and letters that are stored in Twitter’s system.
“This allows our systems to validate your account credentials without revealing your password,” said Agrawal, who says the technology they’re using to mask user passwords is the industry standard.
“Due to a bug, passwords were written to an internal log before completing the hashing process,” he continued. “We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again.”
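Agrawal’s description maps onto standard salted password hashing. Here is a minimal sketch of the idea in Python; it uses the standard library’s PBKDF2 as a stand-in, since bcrypt itself requires a third-party package. The bug Twitter describes amounted to writing the plaintext password to a log before the hashing step below ran.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest). Only the salt and digest are stored;
    the plaintext password is discarded after hashing."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Recompute the hash from the supplied password and compare;
    # the stored digest never reveals the original password.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

This is why Twitter can validate logins without anyone at the company being able to see the password itself: verification only ever compares hashes.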
Agrawal wrote that while Twitter has no reason to believe password information ever left Twitter’s systems or was misused by anyone, the company is still urging all Twitter users to reset their passwords NOW.
-Change your password on Twitter and on any other service where you may have used the same password.
-Use a strong password that you don’t reuse on other websites.
-Enable login verification, also known as two factor authentication. This is the single best action you can take to increase your account security.
-Use a password manager to make sure you’re using strong, unique passwords everywhere.
This may be much ado about nothing disclosed out of an abundance of caution, or further investigation may reveal different findings. It doesn’t matter for right now: If you’re a Twitter user and if you didn’t take my advice to go change your password yet, go do it now! That is, if you can.
Twitter.com seems responsive now, but for some period of time Thursday afternoon Twitter had problems displaying many Twitter profiles, or even its homepage. Just a few moments ago, I tried to visit the Twitter CTO’s profile page and got this (ditto for Twitter.com):
If for some reason you can’t reach Twitter.com, try again soon. Put it on your to-do list or calendar for an hour from now. Seriously, do it now or very soon.
I have sent some more specific questions about this incident in to Twitter. More updates as available.
Update, 8:04 p.m. ET: Went to reset my password at Twitter and it said my new password was strong, but when I submitted it I was led to a dead page. But after logging in again at twitter.com the new password worked (and the old one didn’t anymore). Then it prompted me to enter a one-time code from my mobile app (you do have 2-factor set up on Twitter, right?) Password successfully changed!
Storing passwords in plaintext online is never a good idea, but it’s remarkable how many companies have employees who are doing just that using online collaboration tools like Trello.com. Last week, KrebsOnSecurity notified a host of companies that employees were using Trello to share passwords for sensitive internal resources. Among those put at risk by such activity were an insurance firm, a state government agency and ride-hailing service Uber.com.
By default, Trello boards for both enterprise and personal use are set to either private (requires a password to view the content) or team-visible only (approved members of the collaboration team can view).
But that doesn’t stop individual Trello users from manually sharing personal boards that include proprietary employer data, information that may be indexed by search engines and available to anyone with a Web browser. And unfortunately for organizations, far too many employees are posting sensitive internal passwords and other resources on their own personal Trello boards that are left open and exposed online.
KrebsOnSecurity spent the past week using Google to discover unprotected personal Trello boards that listed employer passwords and other sensitive data. Pictured above was a personal board set up by some Uber developers in the company’s Asia-Pacific region, which included passwords needed to view a host of internal Google Documents and images.
Uber spokesperson Melanie Ensign said the Trello board in question was made private shortly after being notified by this publication, among others.
“We had a handful of members in random parts of the world who didn’t realize they were openly sharing this information,” Ensign said. “We’ve reached out to these teams to remind people that these things need to happen behind internal resources. Employee awareness is an ongoing challenge, but so far we haven’t found any user data on any of the exposed boards. We may have dodged a bullet here, and it definitely could have been worse.”
Ensign said the initial report about the exposed board came through the company’s bug bounty program, and that the person who reported it would receive at least the minimum bounty amount — $500 — for reporting the incident (Uber hasn’t yet decided whether the award should be higher for this incident).
The Uber employees who created the board “used their work email to open a public board that they weren’t supposed to,” Ensign said. “They didn’t go through our enterprise account to create that. We first found out about it through our bug bounty program, and while it’s not technically a vulnerability in our products, it’s certainly something that we would pay for anyway. In this case, we got multiple reports about the same thing, but we always pay the first report we get.”
Of course, not every company has a bug bounty program to incentivize the discovery and private reporting of internal resources that may be inadvertently exposed online.
Screenshots that KrebsOnSecurity took of many far more shocking examples of employees posting dozens of passwords for sensitive internal resources are not pictured here because the affected parties still have not responded to alerts provided by this author.
Trello is one of many online collaboration tools made by Atlassian Corporation PLC, a technology company based in Sydney, Australia. Trello co-founder Michael Pryor said Trello boards are set to private by default and must be manually changed to public by the user.
“We strive to make sure public boards are being created intentionally and have built in safeguards to confirm the intention of a user before they make a board publicly visible,” Pryor said. “Additionally, visibility settings are displayed persistently on the top of every board.”
A Trello spokesperson said the privacy changes were made to bring the company’s policies in line with new EU privacy laws that come into enforcement later this month. But they also clarify that Trello’s enterprise features allow the enterprise admins to control the security and permissions around a work account an employee may have created before the enterprise product was purchased.
Uber spokesperson Ensign called the changes welcome.
“As a result companies will have more security control over Trello boards created by current/former employees and contractors, so we’re happy to see the change,” she said.
On two occasions this past year I’ve published stories here warning about the prospect that new European privacy regulations could result in more spams and scams ending up in your inbox. This post explains in a question and answer format some of the reasoning that went into that prediction, and responds to many of the criticisms leveled against it.
Before we get to the Q&A, a bit of background is in order. On May 25, 2018 the General Data Protection Regulation (GDPR) takes effect. The law, enacted by the European Parliament, requires companies to get affirmative consent for any personal information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues.
In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — has proposed redacting key bits of personal data from WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses).
Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free.
But in a bid to help registrars comply with the GDPR, ICANN is moving forward on a plan to remove critical data elements from all public WHOIS records. Under the new system, registrars would collect all the same data points about their customers, yet limit how much of that information is made available via public WHOIS lookups.
The data to be redacted includes the name of the person who registered the domain, as well as their phone number, physical address and email address. The new rules would apply to all domain name registrars globally.
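The effect of the plan can be sketched in a few lines: registrars keep collecting the full record, but the public lookup returns a scrubbed copy. The field names below are illustrative, not the actual WHOIS schema.

```python
# Fields ICANN's plan would redact from public WHOIS output.
# (Hypothetical field names for illustration only.)
REDACTED_FIELDS = {"registrant_name", "phone", "address", "email"}

def public_whois(full_record: dict) -> dict:
    """Return the public view of a WHOIS record, with personal
    data points masked but everything else left intact."""
    return {k: ("REDACTED FOR PRIVACY" if k in REDACTED_FIELDS else v)
            for k, v in full_record.items()}

record = {
    "domain": "example.com",
    "registrant_name": "Jane Doe",
    "email": "jane@example.com",
    "phone": "+1.5551234567",
    "address": "1 Main St",
    "created": "2017-01-01",
}
print(public_whois(record)["registrant_name"])  # → REDACTED FOR PRIVACY
```

The registrar still holds the full record internally; only the public view changes.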
ICANN has proposed creating an “accreditation system” that would vet access to personal data in WHOIS records for several groups, including journalists, security researchers, and law enforcement officials, as well as intellectual property rights holders who routinely use WHOIS records to combat piracy and trademark abuse.
But at an ICANN meeting in San Juan, Puerto Rico last month, ICANN representatives conceded that a proposal for how such a vetting system might work probably would not be ready until December 2018. Assuming ICANN meets that deadline, it could be many months after that before the hundreds of domain registrars around the world take steps to adopt the new measures.
In a series of posts on Twitter, I predicted that the WHOIS changes coming with GDPR will likely result in a noticeable increase in cybercrime — particularly in the form of phishing and other types of spam. In response to those tweets, several authors on Wednesday published an article for Georgia Tech’s Internet Governance Project titled, “WHOIS afraid of the dark? Truth or illusion, let’s know the difference when it comes to WHOIS.”
The following Q&A is intended to address many of the more misleading claims and assertions made in that article.
Cyber criminals don’t use their real information in WHOIS registrations, so what’s the big deal if the data currently available in WHOIS records is no longer in the public domain after May 25?
I can point to dozens of stories printed here — and probably hundreds elsewhere — that clearly demonstrate otherwise. Whether or not cyber crooks do provide their real information is beside the point. ANY information they provide — and especially information that they re-use across multiple domains and cybercrime campaigns — is invaluable to both grouping cybercriminal operations and in ultimately identifying who’s responsible for these activities.
To understand why data reuse in WHOIS records is so common among crooks, put yourself in the shoes of your average scammer or spammer — someone who has to register dozens or even hundreds or thousands of domains a week to ply their trade. Are you going to create hundreds or thousands of email addresses and fabricate as many personal details to make your WHOIS listings that much harder for researchers to track? The answer is that those who take this extraordinary step are far and away the exception rather than the rule. Most simply reuse the same email address and phony address/phone/contact information across many domains as long as it remains profitable for them to do so.
This pattern of WHOIS data reuse doesn’t just extend across a few weeks or months. Very often, if a spammer, phisher or scammer can get away with re-using the same WHOIS details over many years without any deleterious effects to their operations, they will happily do so. Why they may do this is their own business, but nevertheless it makes WHOIS an incredibly powerful tool for tracking threat actors across multiple networks, registrars and Internet epochs.
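The investigative technique described above boils down to pivoting on shared WHOIS fields. A toy sketch of the grouping step, using entirely made-up records (real investigations pull these from historic WHOIS archives):

```python
from collections import defaultdict

# Hypothetical (domain, registrant_email) pairs; all names invented.
records = [
    ("cheap-pills-now.example", "spamlord@mailbox.example"),
    ("best-meds-4u.example",    "spamlord@mailbox.example"),
    ("bank-login-help.example", "phisher99@mailbox.example"),
    ("pills-discount.example",  "spamlord@mailbox.example"),
]

# Group domains by the email address in their registration records.
by_email = defaultdict(list)
for domain, email in records:
    by_email[email].append(domain)

# Any address tied to multiple domains links those domains into one
# likely operation, and is a lead worth pulling on.
clusters = {e: d for e, d in by_email.items() if len(d) > 1}
print(clusters)
```

Redact the email field from public WHOIS and this linkage, trivial as it is, becomes impossible for outside researchers.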
All domain registrars offer free or a-la-carte privacy protection services that mask the personal information provided by the domain registrant. Most cybercriminals — unless they are dumb or lazy — are already taking advantage of these anyway, so it’s not clear why masking domain registration for everyone is going to change the status quo by much.
It is true that some domain registrants do take advantage of WHOIS privacy services, but based on countless investigations I have conducted using WHOIS to uncover cybercrime businesses and operators, I’d wager that cybercrooks more often do not use these services. Not infrequently, when they do use WHOIS privacy options there are still gaps in coverage at some point in the domain’s history (such as when a registrant switches hosting providers) which are indexed by historic WHOIS records and that offer a brief window of visibility into the details behind the registration.
This is demonstrably true even for organized cybercrime groups and for nation state actors, and these are arguably some of the most sophisticated and savvy cybercriminals out there.
It’s worth adding that if so many cybercrooks seem nonchalant about adopting WHOIS privacy services it may well be because they reside in countries where the rule of law is not well-established, or their host country doesn’t particularly discourage their activities so long as they’re not violating the golden rule — namely, targeting people in their own backyard. And so they may not particularly care about covering their tracks. Or in other cases they do care, but nevertheless make mistakes or get sloppy at some point, as most cybercriminals do.
The GDPR does not apply to businesses — only to individuals — so there is no reason researchers or anyone else should be unable to find domain registration details for organizations and companies in the WHOIS database after May 25, right?
It is true that the European privacy regulations as they relate to WHOIS records do not apply to businesses registering domain names. However, the domain registrar industry — which operates on razor-thin profit margins and which has long sought to be free from any WHOIS requirements or accountability whatsoever — won’t exactly be tripping over themselves to add more complexity to their WHOIS efforts just to make a distinction between businesses and individuals.
As a result, registrars simply won’t make that distinction because there is no mandate that they must. They’ll just adopt the same WHOIS data collection and display policies across the board, regardless of whether the WHOIS details for a given domain suggest that the registrant is a business or an individual.
But the GDPR only applies to data collected about people in Europe, so why should this impact WHOIS registration details collected on people who are outside of Europe?
Again, domain registrars are the ones collecting WHOIS data, and they are most unlikely to develop WHOIS record collection and dissemination policies that seek to differentiate between entities covered by GDPR and those that may not be. Such an attempt would be fraught with legal and monetary complications that they simply will not take on voluntarily.
What’s more, the domain registrar community tends to view the public display of WHOIS data as a nuisance and a cost center. They have mainly only allowed public access to WHOIS data because ICANN’s contracts state that they should. So, from the registrar community’s point of view, the less information they must make available to the public, the better.
Like it or not, the job of tracking down and bringing cybercriminals to justice falls to law enforcement agencies — not security researchers. Law enforcement agencies will still have unfettered access to full WHOIS records.
As it relates to inter-state crimes (i.e., the bulk of all Internet abuse), law enforcement — at least in the United States — is divided into two main components: the investigative side (i.e., the FBI and Secret Service) and the prosecutorial side (the state and district attorneys who actually initiate court proceedings intended to bring an accused person to justice).
Much of the legwork done to provide the evidence needed to convince prosecutors that there is even a case worth prosecuting is performed by security researchers. The reasons why this is true are too numerous to delve into here, but the safe answer is that law enforcement investigators typically are more motivated to focus on crimes for which they can readily envision someone getting prosecuted — and because very often their plate is full with far more pressing, immediate and local (physical) crimes.
Admittedly, this is a bit of a blanket statement because in many cases local, state and federal law enforcement agencies will do this often tedious legwork of cybercrime investigations on their own — provided it involves or impacts someone in their jurisdiction. But due in large part to these jurisdictional issues, politics and the need to build prosecutions around a specific locality when it comes to cybercrime cases, very often law enforcement agencies tend to miss the forest for the trees.
Who cares if security researchers will lose access to WHOIS data, anyway? To borrow an assertion from the Internet Governance article, “maybe it’s high time for security researchers and businesses that harvest personal information from WHOIS on an industrial scale to refine and remodel their research methods and business models.”
This is an alluring argument. After all, the technology and security industries claim to be based on innovation. But consider carefully how anti-virus, anti-spam or firewall technologies currently work. The unfortunate reality is that these technologies are still mostly powered by humans, and those humans rely heavily on access to key details about domain reputation and ownership history.
Those metrics for reputation weigh a host of different qualities, but a huge component of that reputation score is determining whether a given domain or Internet address has been connected to any other previous scams, spams, attacks or other badness. We can argue about whether this is the best way to measure reputation, but it doesn’t change the prospect that many of these technologies will in all likelihood perform less effectively after WHOIS records start being heavily redacted.
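To make the point concrete, here is a minimal sketch of how a reputation heuristic of the kind described above might combine signals. All feature names and weights here are invented for illustration; real systems draw on far richer data (passive DNS, registration history, hosting ASN, and so on), but the shape is the same: prior abuse and registrant overlap dominate the score, and both pivots lean on WHOIS history.

```python
# Toy illustration of a domain-reputation heuristic. Feature names and
# weights are invented for this sketch, not taken from any real product.

def reputation_score(domain_facts: dict) -> float:
    """Return a score in [0, 1]; lower means worse reputation."""
    score = 1.0
    # Prior association with abuse is the dominant signal.
    score -= 0.5 * min(domain_facts.get("past_abuse_reports", 0), 10) / 10
    # Very young domains are riskier than long-lived ones.
    if domain_facts.get("age_days", 0) < 30:
        score -= 0.2
    # Sharing a registrant with known-bad domains is a strong tell --
    # exactly the kind of pivot that WHOIS redaction takes away.
    if domain_facts.get("registrant_shared_with_bad_domains"):
        score -= 0.3
    return max(score, 0.0)

print(reputation_score({"age_days": 5, "past_abuse_reports": 4,
                        "registrant_shared_with_bad_domains": True}))
```

The exact weights are beside the point; what matters is that the registrant-overlap signal disappears entirely once WHOIS records are redacted.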
Don’t advances in artificial intelligence and machine learning obviate the need for researchers to have access to WHOIS data?
This sounds like a nice idea, but again it is far removed from current practice. Ask anyone who regularly uses WHOIS data to determine reputation or to track and block malicious online threats, and I’ll wager the answer will be that these analyses are still mostly based on manual lookups and often thankless legwork. Perhaps such trendy technological buzzwords will indeed describe the standard practice of the security industry at some point in the future, but in my experience this does not accurately depict the reality today.
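For readers who haven’t done this legwork themselves, the manual lookups described above are about as low-tech as it sounds: WHOIS is a plain-text protocol over TCP port 43 (RFC 3912), and researchers grep the replies for fields like registrant email. A minimal sketch, with the caveat that the server to ask and the field labels in the reply vary by registry (the sample reply below is illustrative, not a real record):

```python
# Minimal sketch of a raw WHOIS lookup (RFC 3912). The server name is the
# registry WHOIS server for .com; other TLDs use other servers, and field
# labels in replies are not standardized across registries.
import re
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Send a raw WHOIS query over TCP port 43 and return the text reply."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(domain.encode() + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def extract_field(whois_text: str, label: str):
    """Pull a single 'Label: value' field out of a WHOIS reply, if present."""
    match = re.search(rf"^{re.escape(label)}:\s*(.+)$", whois_text,
                      re.MULTILINE | re.IGNORECASE)
    return match.group(1).strip() if match else None

# Parsing demonstrated on a canned reply rather than a live lookup:
sample = ("Domain Name: EXAMPLE.COM\n"
          "Creation Date: 1995-08-14T04:00:00Z\n"
          "Registrant Email: owner@example.com\n")
print(extract_field(sample, "Registrant Email"))  # owner@example.com
```

It is precisely fields like that registrant email — the pivot between one domain and hundreds of its siblings — that blanket redaction removes.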
Okay, but Internet addresses are pretty useful tools for determining reputation. The sharing of IP addresses tied to cybercriminal operations isn’t going to be impacted by the GDPR, is it?
That depends on the organization doing the sharing. I’ve encountered at least two cases in the past few months wherein European-based security firms have been reluctant to share Internet address information at all in response to the GDPR — based on a perceived (if not overly legalistic) interpretation that somehow this information also might be considered personally identifying data. This reluctance to share such information out of a concern that doing so might land the sharer in legal hot water can indeed have a chilling effect on the important sharing of threat intelligence across borders.
According to the Internet Governance article, “If you need to get in touch with a website’s administrator, you will be able to do so in what is a less intrusive manner of achieving this purpose: by using an anonymized email address, or webform, to reach them (The exact implementation will depend on the registry). If this change is inadequate for your ‘private detective’ activities and you require full WHOIS records, including the personal information, then you will need to declare to a domain name registry your specific need for and use of this personal information. Nominet, for instance, has said that interested parties may request the full WHOIS record (including historical data) for a specific domain and get a response within one business day for no charge.”
I’m sure this will go over tremendously both with the hacked sites used to host phishing and/or malware download pages, and with those phished by or served with malware in the added time it will take to relay and approve said requests.
According to a Q3 2017 study (PDF) by security firm Webroot, the average lifespan of a phishing site is between four and eight hours. How is waiting 24 hours before being able to determine who owns the offending domain going to be helpful to either the hacked site or its victims? It also doesn’t seem likely that many other registrars will volunteer for this 24-hour turnaround duty — and indeed no others have publicly demonstrated any willingness to take on this added cost and hassle.
I’ve heard that ICANN is pushing for a delay in the GDPR as it relates to WHOIS records, to give the registrar community time to come up with an accreditation system that would grant vetted researchers access to WHOIS records. Why isn’t that a good middle ground?
It might be, if ICANN hadn’t dragged its heels in taking the GDPR seriously until perhaps the past few months. As it stands, the experts I’ve interviewed see little prospect of such a system being ironed out or gaining the necessary traction among the registrar community anytime soon. Indeed, most of the experts I’ve interviewed predict the Internet community will still be debating how to create such an accreditation system a year from now.
Hence, it’s not likely that WHOIS records will continue to be anywhere near as useful to researchers in a month or so as they were previously. And this reality will persist for many months to come, if indeed some kind of vetted WHOIS access system is ever agreed upon and put into place.
After I registered a domain name using my real email address, I noticed that address started receiving more spam emails. Won’t hiding email addresses in WHOIS records reduce the overall amount of spam I can expect when registering a domain under my real email address?
That depends on whether you believe any of the responses to the bolded questions above. Will that address be spammed by people who try to lure you into paying them to register variations on that domain, or to entice you into purchasing low-cost Web hosting services from some random or shady company? Probably. That’s exactly what happens to almost anyone who registers a domain name that is publicly indexed in WHOIS records.
The real question is whether redacting all email addresses from WHOIS will result in overall more bad stuff entering your inbox and littering the Web, thanks to reputation-based anti-spam and anti-abuse systems failing to work as well as they did before GDPR kicks in.
It’s worth noting that ICANN created a working group to study this exact issue, which noted that “the appearance of email addresses in response to WHOIS queries is indeed a contributor to the receipt of spam, albeit just one of many.” However, the report concluded that “the Committee members involved in the WHOIS study do not believe that the WHOIS service is the dominant source of spam.”
Do you have something against people not getting spammed, or against better privacy in general?
To the contrary, I have worked the majority of my professional career to expose those who are doing the spamming and scamming. And I can say without hesitation that an overwhelming percentage of that research has been possible thanks to data included in public WHOIS registration records.
Is the current WHOIS system outdated, antiquated and in need of an update? Perhaps. But scrapping the current system without establishing anything in between while laboring under the largely untested belief that in doing so we will achieve some kind of privacy utopia seems myopic.
If opponents of the current WHOIS system are being intellectually honest, they will make the following argument and stick to it: By restricting access to information currently available in the WHOIS system, whatever losses or negative consequences on security we may suffer as a result will be worth the cost in terms of added privacy. That’s an argument I can respect, if not agree with.
But for the most part that’s not the refrain I’m hearing. Instead, what this camp seems to be saying is that if you’re not on board with the WHOIS changes that will be brought about by the GDPR, then there must be something wrong with you, and in any case here are a bunch of thinly sourced reasons why the coming changes might not be that bad.
Authorities in the U.S., U.K. and the Netherlands on Tuesday took down popular online attack-for-hire service WebStresser.org and arrested its alleged administrators. Investigators say that prior to the takedown, the service had more than 136,000 registered users and was responsible for launching somewhere between four and six million attacks over the past three years.
The action, dubbed “Operation Power Off,” targeted WebStresser.org (previously Webstresser.co), one of the most active services for launching point-and-click distributed denial-of-service (DDoS) attacks. WebStresser was one of many so-called “booter” services — virtual hired muscle that anyone can rent to knock nearly any website or Internet user offline.
“The damage of these attacks is substantial,” reads a statement from the Dutch National Police in a Reddit thread about the takedown. “Victims are out of business for a period of time, and spend money on mitigation and on (other) security measures.”
In a separate statement released this morning, Europol — the law enforcement agency of the European Union — said “further measures were taken against the top users of this marketplace in the Netherlands, Italy, Spain, Croatia, the United Kingdom, Australia, Canada and Hong Kong.” The servers powering WebStresser were located in Germany, the Netherlands and the United States, according to Europol.
The U.K.’s National Crime Agency said WebStresser could be rented for as little as $14.99, and that the service allowed people with little or no technical knowledge to launch crippling DDoS attacks around the world.
Neither the Dutch nor U.K. authorities would say who was arrested in connection with this takedown. But according to information obtained by KrebsOnSecurity, the administrator of WebStresser allegedly was a young man in Serbia named Jovan Mirkovic.
Mirkovic, who went by the hacker nickname “m1rk,” also used the alias “Mirkovik Babs” on Facebook where for years he openly discussed his role in programming and ultimately running WebStresser. The last post on Mirkovic’s Facebook page, dated April 3 (the day before the takedown), shows the young hacker sipping what appears to be liquor while bathing. Below that image are dozens of comments left in the past few hours, most of them simply, “RIP.”
Tuesday’s action against WebStresser is the latest such takedown to target both owners and customers of booter services. Many booter service operators apparently believe (or at least hide behind) a wordy “terms of service” agreement that all customers must acknowledge, under the assumption that somehow this absolves them of any sort of liability for how their customers use the service — regardless of how much hand-holding and technical support booter service administrators offer customers.
In October the FBI released an advisory warning that the use of booter services — also called “stressers” — is punishable under the Computer Fraud and Abuse Act, and may result in arrest and criminal prosecution.
In 2016, authorities in Israel arrested two 18-year-old men accused of running vDOS, until then the most popular and powerful booter service on the market. Their arrests came within hours of a story at KrebsOnSecurity that named the men and detailed how their service had been hacked.
Many in the hacker community have criticized authorities for targeting booter service administrators and users and for not pursuing what they perceive as more serious cybercriminals, noting that the vast majority of both groups are young men under the age of 21. In its Reddit thread, the Dutch Police addressed this criticism head-on, saying Dutch authorities are working on a new legal intervention called “Hack_Right,” a diversion program intended for first-time cyber offenders.
“Prevention of re-offending by offering a combination of restorative justice, training, coaching and positive alternatives is the main aim of this project,” the Dutch Police wrote. “See page 24 of the 5th European Cyber Security Perspectives and stay tuned on our THTC twitter account #HackRight! AND we are working on a media campaign to prevent youngsters from starting to commit cyber crimes in the first place. Expect a launch soon.”
In the meantime, it’s likely we’ll sooner see the launch of yet another booter service. According to reviews and sales threads at stresserforums[dot]net — a marketplace for booter buyers and sellers — there are dozens of other booter services in operation, with new ones coming online almost every month.
MEDantex, a Kansas-based company that provides medical transcription services for hospitals, clinics and private physicians, took down its customer Web portal last week after being notified by KrebsOnSecurity that it was leaking sensitive patient medical records — apparently for thousands of physicians.
On Friday, KrebsOnSecurity learned that the portion of MEDantex’s site which was supposed to be a password-protected portal physicians could use to upload audio-recorded notes about their patients was instead completely open to the Internet.
What’s more, numerous online tools intended for use by MEDantex employees were exposed to anyone with a Web browser, including pages that allowed visitors to add or delete users, and to search for patient records by physician or patient name. No authentication was required to access any of these pages.
Several MEDantex portal pages left exposed to the Web suggest that the company recently was the victim of WhiteRose, a strain of ransomware that encrypts a victim’s files unless and until a ransom demand is paid — usually in the form of some virtual currency such as bitcoin.
Contacted by KrebsOnSecurity, MEDantex founder and chief executive Sreeram Pydah confirmed that the Wichita, Kansas based transcription firm recently rebuilt its online servers after suffering a ransomware infestation. Pydah said the MEDantex portal was taken down for nearly two weeks, and that it appears the glitch exposing patient records to the Web was somehow incorporated into that rebuild.
“There was some ransomware injection [into the site], and we rebuilt it,” Pydah said, just minutes before disabling the portal (which remains down as of this publication). “I don’t know how they left the documents in the open like that. We’re going to take the site down and try to figure out how this happened.”
It’s unclear exactly how many patient records were left exposed on MEDantex’s site. But one of the main exposed directories was named “/documents/userdoc,” and it included more than 2,300 physicians listed alphabetically by first initial and last name. Drilling down into each of these directories revealed a varying number of patient records — displayed and downloadable as Microsoft Word documents and/or raw audio files.
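The failure here is a common one: a path that is supposed to sit behind a login page answers unauthenticated requests with content instead of a challenge. A quick defensive check of the sort any site operator can run is sketched below; the path names are placeholders, not MEDantex’s actual portal URLs.

```python
# Defensive sketch: verify that supposedly private paths actually demand
# credentials. A 401/403 to an anonymous client is what you want to see;
# a 200 means the content is simply exposed. URLs here are placeholders.
from urllib.request import urlopen
from urllib.error import HTTPError

def classify_status(code: int) -> str:
    """Interpret the HTTP status an unauthenticated client receives."""
    if code in (401, 403):
        return "protected"      # server demanded credentials
    if 200 <= code < 300:
        return "exposed"        # content served with no login at all
    return "inconclusive"       # other codes need a closer look

def audit(url: str) -> str:
    """Fetch a path with no credentials attached and classify the response."""
    try:
        with urlopen(url, timeout=10) as resp:
            return classify_status(resp.status)
    except HTTPError as err:
        return classify_status(err.code)

print(classify_status(200), classify_status(401))  # exposed protected
```

Had anything like this been run against the rebuilt portal, the open “/documents/userdoc” tree would have stood out immediately.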
Although many of the exposed documents appear to be quite recent, some of the records dated as far back as 2007. It’s also unclear how long the data was accessible, but this Google cache of the MEDantex physician portal seems to indicate it was wide open on April 10, 2018.
The clients listed on MEDantex’s site include New York University Medical Center; San Francisco Multi-Specialty Medical Group; Jackson Hospital in Montgomery, Ala.; Allen County Hospital in Iola, Kan.; Green Clinic Surgical Hospital in Ruston, La.; Trillium Specialty Hospital in Mesa and Sun City, Ariz.; Cooper University Hospital in Camden, N.J.; Sunrise Medical Group in Miami; the Wichita Clinic in Wichita, Kan.; the Kansas Spine Center; the Kansas Orthopedic Center; and Foundation Surgical Hospitals nationwide. MEDantex’s site states these are just some of the healthcare organizations partnering with the company for transcription services.
Unfortunately, the incident at MEDantex is far from an anomaly. A study of data breaches released this month by Verizon Enterprise found that nearly a quarter of all breaches documented by the company in 2017 involved healthcare organizations.
Verizon says ransomware attacks accounted for 85 percent of all malware in healthcare breaches last year, and that healthcare is the only industry in which the threat from the inside is greater than that from outside.
“Human error is a major contributor to those stats,” the report concluded.
According to a story at BleepingComputer, a security news and help forum that specializes in covering ransomware outbreaks, WhiteRose was first spotted about a month ago. BleepingComputer founder Lawrence Abrams says it’s not clear how this ransomware is being distributed, but that reports indicate it is being manually installed by hacking into Remote Desktop services.
Fortunately for WhiteRose victims, this particular strain of ransomware is decryptable without the need to pay the ransom.
“The good news is this ransomware appears to be decryptable by Michael Gillespie,” Abrams wrote. “So if you become infected with WhiteRose, do not pay the ransom, and instead post a request for help in our WhiteRose Support & Help topic.”
Ransomware victims may also be able to find assistance in unlocking data without paying from nomoreransom.org.
KrebsOnSecurity would like to thank India-based cybersecurity startup Banbreach for the heads up about this incident.
Facebook has built some of the most advanced algorithms for tracking users, but when it comes to acting on user abuse reports about Facebook groups and content that clearly violate the company’s “community standards,” the social media giant’s technology appears to be woefully inadequate.
Last week, Facebook deleted almost 120 groups totaling more than 300,000 members. The groups were mostly closed — requiring approval from group administrators before outsiders could view the day-to-day postings of group members.
However, the titles, images and postings available on each group’s front page left little doubt about their true purpose: Selling everything from stolen credit cards, identities and hacked accounts to services that help automate things like spamming, phishing and denial-of-service attacks for hire.
To its credit, Facebook deleted the groups within just a few hours of KrebsOnSecurity sharing via email a spreadsheet detailing each group, which concluded that the average length of time the groups had been active on Facebook was two years. But I suspect that the company took this extraordinary step mainly because I informed them that I intended to write about the proliferation of cybercrime-based groups on Facebook.
That story, Deleted Facebook Cybercrime Groups had 300,000 Members, ended with a statement from Facebook promising to crack down on such activity and instructing users on how to report groups that violate its community standards.
In short order, some of the groups I reported that were removed re-established themselves within hours of Facebook’s action. Rather than contacting Facebook’s public relations arm directly, I decided to report those resurrected groups and others using Facebook’s stated process. Roughly two days later I received a series of replies saying that Facebook had reviewed my reports but that none of the groups were found to have violated its standards. Here’s a snippet from those replies:
Perhaps I should give Facebook the benefit of the doubt: Maybe my multiple reports one after the other triggered some kind of anti-abuse feature that is designed to throttle those who would seek to abuse it to get otherwise legitimate groups taken offline — much in the way that pools of automated bot accounts have been known to abuse Twitter’s reporting system to successfully sideline accounts of specific targets.
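To be clear, this is pure speculation about Facebook’s internals, but report throttling of the kind guessed at above is commonly built as a per-reporter token bucket: a burst of reports drains the bucket, and further reports from the same account are deprioritized until it refills. A sketch of that mechanism, with invented capacity and refill numbers:

```python
# Speculative sketch only: a per-reporter token bucket of the kind often
# used to throttle abuse-report floods. Capacity and refill rate are
# invented numbers, not anything known about Facebook's systems.

class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_sec = refill_per_sec
        self.last = 0.0

    def allow(self, now: float) -> bool:
        """Spend one token for a report at time `now`; False = throttled."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_sec=0.1)
# Four reports in quick succession: the fourth gets throttled.
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 0.3)])
```

If something like this sat in front of the human review queue, a rapid string of reports from one account would look indistinguishable from a harassment campaign.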
Or it could be that I simply didn’t click the proper sequence of buttons when reporting these groups. The closest match I could find in Facebook’s abuse reporting system were, “Doesn’t belong on Facebook,” and “Purchase or sale of drugs, guns or regulated products.” There was/is no option for “selling hacked accounts, credit cards and identities,” or anything of that sort.
In any case, one thing seems clear: Naming and shaming these shady Facebook groups via Twitter seems to work better right now for getting them removed from Facebook than using Facebook’s own formal abuse reporting process. So that’s what I did on Thursday. Here’s an example:
That group, too, was nixed shortly after my tweet. And so it went for other groups I mentioned in my tweetstorm today. But in response to that flurry of tweets about abusive groups on Facebook, I heard from dozens of other Twitter users who said they’d received the same “does not violate our community standards” reply from Facebook after reporting other groups that clearly flouted the company’s standards.
Pete Voss, Facebook’s communications manager, apologized for the oversight.
“We’re sorry about this mistake,” Voss said. “Not removing this material was an error and we removed it as soon as we investigated. Our team processes millions of reports each week, and sometimes we get things wrong. We are reviewing this case specifically, including the user’s reporting options, and we are taking steps to improve the experience, which could include broadening the scope of categories to choose from.”
Facebook CEO and founder Mark Zuckerberg testified before Congress last week in response to allegations that the company wasn’t doing enough to halt the abuse of its platform for things like fake news, hate speech and terrorist content. It emerged that Facebook already employs 15,000 human moderators to screen and remove offensive content, and that it plans to hire another 5,000 by the end of this year.
“But right now, those moderators can only react to posts Facebook users have flagged,” writes Will Knight, for Technologyreview.com.
Zuckerberg told lawmakers that Facebook hopes expected advances in artificial intelligence or “AI” technology will soon help the social network do a better job self-policing against abusive content. But for the time being, as long as Facebook mainly acts on abuse reports only when it is publicly pressured to do so by lawmakers or people with hundreds of thousands of followers, the company will continue to be dogged by the perception that policing such content more aggressively is simply bad for its business model.
In 2016, KrebsOnSecurity exposed a network of phony Web sites and fake online reviews that funneled those seeking help for drug and alcohol addiction toward rehab centers that were secretly affiliated with the Church of Scientology. Not long after the story ran, that network of bogus reviews disappeared from the Web. Over the past few months, however, the same prolific purveyor of these phantom sites and reviews appears to be back at it again, enlisting the help of Internet users and paying people $25-$35 for each fake listing.
Sometime in March 2018, ads began appearing on Craigslist promoting part-time “social media assistant” jobs, in which interested applicants are directed to sign up for positions at seorehabs[dot]com. This site promotes itself as “leaders in addiction recovery consulting,” explaining that assistants can earn a minimum of $25 just for creating individual Google for Business listings tied to a few dozen generic-sounding addiction recovery center names, such as “Integra Addiction Center,” and “First Exit Recovery.”
Applicants who sign up are given detailed instructions on how to step through Google’s anti-abuse process for creating listings, which include receiving a postcard via snail mail from Google that contains a PIN which needs to be entered at Google’s site before a listing can be created.
Assistants are cautioned not to create more than two listings per street address, but otherwise to use any U.S.-based street address and to leave blank the phone number and Web site for the new business listing.
In my story Scientology Seeks Captive Converts Via Google Maps, Drug Rehab Centers, I showed how a labyrinthine network of fake online reviews that steered Internet searches toward rehab centers funded by Scientology adherents was set up by TopSeek Inc., which bills itself as a collection of “local marketing experts.” According to LinkedIn, TopSeek is owned by John Harvey, an individual (or alias) who lists his address variously as Sacramento, Calif. and Hawaii.
Although the current Web site registration records from registrar giant Godaddy obscure the information for the current owner of seorehabs[dot]com, a historic WHOIS search via Domaintools shows the site was also registered by John Harvey and TopSeek in 2015. Mr. Harvey did not respond to requests for comment. [Full disclosure: Domaintools previously was an advertiser on KrebsOnSecurity].
TopSeek’s Web site says it works with several clients, but most especially Narconon International — an organization that promotes the rather unorthodox theories of Scientology founder L. Ron Hubbard regarding substance abuse treatment and addiction.
As described in Narconon’s Wikipedia entry, Narconon facilities are known not only for attempting to win over new converts to Scientology, but also for treating all substance abuse addictions with a rather bizarre cocktail consisting mainly of vitamins and long hours in extremely hot saunas. Their Wiki entry documents multiple cases of accidental deaths at Narconon facilities, where some addicts reportedly died from overdoses of vitamins or neglect.

A LUCRATIVE RACKET
Bryan Seely, a security expert who has written extensively about the use of fake search listings to conduct online bait-and-switch scams, said the purpose of sites like those that Seorehabs pays people to create is to funnel calls to a handful of switchboards that then sell the leads to rehab centers that have agreed to pay for them. Many rehab facilities will pay hundreds of dollars for leads that may ultimately lead to a new patient. After all, Seely said, some facilities can then turn around and bill insurance providers for thousands of dollars per patient.
Perhaps best known for a stunt in which he used fake Google Maps listings to intercept calls destined for the FBI and U.S. Secret Service, Seely has learned a thing or two about this industry: Until 2011, he worked for an SEO firm that helped to develop and spread some of the same fake online reviews that he is now helping to clean up.
“Mr. Harvey and TopSeek are crowdsourcing the data input for these fake rehab centers,” Seely said. “The phone numbers all go to just a few dedicated call centers, and it’s not hard to see why. The money is good in this game. He sells a call for $50-$100 at a minimum, and the call center then tries to sell that lead to a treatment facility that has agreed to buy leads. Each lead can be worth $5,000 to $10,000 for a patient who has good health insurance and signs up.”
Many of the listings created by Seorehab assistants are tied to fake Google Maps entries that include phony reviews for bogus treatment centers. In the event those listings get suspended by Google, Seorehab offers detailed instructions on how assistants can delete and re-submit listings.
Assistants also can earn extra money writing fake, glowing reviews of the treatment centers:
Below are some of the plainly bogus reviews and listings created in the last month that pimp the various treatment center names and Web sites provided by Seorehabs. It is not difficult to find dozens of other examples of people who claim to have been at multiple Seorehab-promoted centers scattered across the country. For example, “Gloria Gonzalez” supposedly has been treated at no fewer than seven Seorehab-marketed detox locations in five states, penning each review just in the last month:
A reviewer using the name “Tedi Spicer” also promoted at least seven separate rehab centers across the United States in the past month. Getting treated at so many far-flung facilities in just the few months that the domains for these supposed rehab centers have been online would be an impressive feat:
Bring up any of the Web sites for these supposed rehab listings and you’ll notice they all include the same boilerplate text and graphic design. Aside from combing listings created by the reviewers paid to promote the sites, we can find other Seorehab listings just by searching the Web for chunks of text on the sites. Doing so reveals a long list (this is likely far from comprehensive) of domain names registered in the past few months that were all created with hidden registration details and registered via Godaddy.
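The pivot described above — spotting sibling sites by their shared boilerplate — can be roughly automated with simple text shingling: pages built from the same template share a large fraction of their word n-grams. A sketch, using invented stand-in snippets rather than actual Seorehab page text:

```python
# Near-duplicate detection via word shingling: pages stamped out of the
# same template share most of their n-word phrases. The two snippets
# below are invented stand-ins, not text from any real rehab site.

def shingles(text: str, n: int = 5) -> set:
    """Set of n-word shingles from the page text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a: str, b: str, n: int = 5) -> float:
    """Jaccard similarity of the two pages' shingle sets."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

page1 = "Our caring staff will guide you through every step of recovery call now"
page2 = "Our caring staff will guide you through every step of recovery contact us"
print(round(similarity(page1, page2), 2))
```

Scored against a corpus of crawled pages, a high similarity to one known Seorehab site flags the rest of the family, which is essentially what searching the Web for distinctive chunks of the boilerplate accomplishes by hand.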
Seely said he spent a few hours this week calling dozens of phone numbers tied to these rehab centers promoted by TopSeek, and created a spreadsheet documenting his work and results here (Google Sheets).
Seely said while he would never advocate such activity, TopSeek’s fake listings could end up costing Mr. Harvey plenty of money if someone figured out a way to either mass-report the listings as fraudulent or automate calls to the handful of hotlines tied to the listings.
“It would kill his business until he changes all the phone numbers tied to these fake listings, but if he had to do that he’d have to pay people to rebuild all the directories that link to these sites,” he said.

WHAT YOU CAN DO ABOUT FAKE ONLINE REVIEWS
Before doing business with a company you found online, don’t just pick the company that comes up at the top of search results on Google or any other search engine. Unfortunately, that generally guarantees little more than the company is good at marketing.
Take the time to research the companies you wish to hire before booking them for jobs or services — especially when it comes to big, expensive, and potentially risky services like drug rehab or moving companies. By the way, if you’re looking for a legitimate rehab facility, you could do worse than to start at rehabs.com, a legitimate rehab search engine.
It’s a good idea to get in the habit of verifying that the organization’s physical address, phone number and Web address shown in the search result match that of the landing page. If the phone numbers are different, use the contact number listed on the linked site.
Take the time to learn about the organization’s reputation online and in social media; if it has none (other than a Google Maps listing with all glowing, 5-star reviews), it’s probably fake. Search the Web for any public records tied to the business’ listed physical address, including articles of incorporation from the local secretary of state office online.
A search of the company’s domain name registration records can give you an idea of how long its Web site has been in business, as well as additional details about the organization (although the ability to do this may soon be a thing of the past).
Seely said one surefire way to avoid these marketing shell games is to ask a simple question of the person who answers the phone in the online listing.
“Ask anyone on the phone what company they’re with,” Seely said. “Have them tell you, take their information and then call them back. If they aren’t forthcoming about who they are, they’re most likely a scam.”
In 2016, Seely published a book on Amazon about the thriving and insanely lucrative underground business of fake online reviews. He’s agreed to let KrebsOnSecurity republish the entire e-book, which is available for free at this link (PDF).
“This is literally the worst book ever written about Google Maps fraud,” Seely said. “It’s also the best. Is it still a niche if I’m the only one here? The more people who read it, the better.”
Hours after being alerted by KrebsOnSecurity, Facebook last week deleted almost 120 private discussion groups totaling more than 300,000 members who flagrantly promoted a host of illicit activities on the social media network’s platform. The scam groups facilitated a broad spectrum of shady activities, including spamming, wire fraud, account takeovers, phony tax refunds, 419 scams, denial-of-service attack-for-hire services and botnet creation tools. The average age of these groups on Facebook’s platform was two years.
On Thursday, April 12, KrebsOnSecurity spent roughly two hours combing Facebook for groups whose sole purpose appeared to be flouting the company’s terms of service agreement about what types of content it will or will not tolerate on its platform.
My research centered on groups whose singular focus was promoting all manner of cyber fraud, but most especially those engaged in identity theft, spamming, account takeovers and credit card fraud. Virtually all of these groups advertised their intent by stating well-known terms of fraud in their group names, such as “botnet helpdesk,” “spamming,” “carding” (referring to credit card fraud), “DDoS” (distributed denial-of-service attacks), “tax refund fraud,” and account takeovers.
Each of these closed groups solicited new members to engage in a variety of shady activities. Some had existed on Facebook for up to nine years; approximately ten percent of them had plied their trade on the social network for more than four years.
Here is a spreadsheet (PDF) listing all of the offending groups reported, including: Their stated group names; the length of time they were present on Facebook; the number of members; whether the group was promoting a third-party site on the dark or clear Web; and a link to the offending group. A copy of the same spreadsheet in .csv format is available here.
The biggest collection of groups banned last week were those promoting the sale and use of stolen credit and debit card accounts. The next largest collection included groups facilitating account takeovers — methods for mass-hacking emails and passwords for countless online accounts such as Amazon, Google, Netflix and PayPal, as well as a host of online banking services.
In a statement to KrebsOnSecurity, Facebook pledged to be more proactive about policing its network for these types of groups.
“We thank Mr. Krebs for bringing these groups to our attention, we removed them as soon as we investigated,” said Pete Voss, Facebook’s communications director. “We investigated these groups as soon as we were aware of the report, and once we confirmed that they violated our Community Standards, we disabled them and removed the group admins. We encourage our community to report anything they see that they don’t think should be in Facebook, so we can take swift action.”
KrebsOnSecurity’s research was far from exhaustive: For the most part, I only looked at groups that promoted fraudulent activities in the English language. Also, I ignored groups that had fewer than 25 members. As such, there may well be hundreds or thousands of other groups that openly promote fraud as their purpose of membership but achieve greater stealth by masking their intent with variations on — or misspellings of — common cyber fraud slang terms.
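That stealth tactic is trivial to automate from the other side. A minimal, hypothetical sketch of how a handful of character substitutions multiplies the spellings a keyword-based search (or moderation filter) would need to cover:

```python
def spelling_variants(term: str) -> set:
    """Generate simple leetspeak-style variants of a fraud slang term.
    The substitution table here is illustrative, not exhaustive."""
    subs = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "$"}
    variants = {term}
    for src, dst in subs.items():
        # Each substitution applies to every variant found so far,
        # so the set grows combinatorially with the term's letters.
        variants |= {v.replace(src, dst) for v in variants}
    return variants

# "carding" alone yields "c4rding", "card1ng", "c4rd1ng", etc.
v = spelling_variants("carding")
```

A filter that matches only the canonical spelling misses every one of these.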
Facebook said its community standards policy does not allow the promotion or sale of illegal goods or services including credit card numbers or CVV numbers (stolen card details marketed for use in online fraud), and that once a violation is reported, its teams review a report and remove the offending post or group if it violates those policies.
The company added that Facebook users can report suspected violations by loading a group’s page, clicking “…” in the top right and selecting “Report Group”. Users who wish to learn more about reporting abusive groups can visit facebook.com/report.
“As technology improves, we will continue to look carefully at other ways to use automation,” Facebook’s statement concludes, responding to questions from KrebsOnSecurity about what steps it might take to more proactively scour its networks for abusive groups. “Of course, a lot of the work we do is very contextual, such as determining whether a particular comment is hateful or bullying. That’s why we have real people looking at those reports and making the decisions.”
Facebook’s stated newfound interest in cleaning up its platform comes as the social networking giant finds itself reeling from a scandal in which Cambridge Analytica, a political data firm, was found to have acquired access to private data on more than 50 million Facebook profiles — most of them scraped without user permission.
The Internal Revenue Service has been urging tax preparation firms to step up their cybersecurity efforts this year, warning that identity thieves and hackers increasingly are targeting certified public accountants (CPAs) in a bid to siphon oodles of sensitive personal and financial data on taxpayers. This is the story of a CPA in New Jersey whose compromise by malware led to identity theft and phony tax refund requests filed on behalf of his clients.
Last month, KrebsOnSecurity was alerted by security expert Alex Holden of Hold Security about a malware gang that appears to have focused on CPAs. The crooks in this case were using a Web-based keylogger that recorded every keystroke typed on the target’s machine, and periodically uploaded screenshots of whatever was being displayed on the victim’s computer screen at the time.
If you’ve never seen one of these keyloggers in action, viewing their output can be a bit unnerving. This particular malware is not terribly sophisticated, but nevertheless is quite effective. It not only grabs any data the victim submits into Web-based forms, but also captures any typing — including backspaces and typos as we can see in the screenshot below.
Whoever was running this scheme had all victim information uploaded to a site that was protected from data scraping by search engines, but the site itself did not require any form of authentication to view data harvested from victim PCs. Rather, the stolen information was indexed by victim and ordered by day, meaning anyone who knew the right URL could view each day’s keylogging record as one long image file.
Those records suggest that this particular CPA — “John,” a New Jersey professional whose real name will be left out of this story — likely had his computer compromised sometime in mid-March 2018 (at least, this is as far back as the keylogging records go for John).
It’s also not clear exactly which method the thieves used to get malware on John’s machine. Screenshots for John’s account suggest he routinely ignored messages from Microsoft and other third party Windows programs about the need to apply critical security updates.
More likely, however, John’s computer was compromised by someone who sent him a booby-trapped email attachment or link. When one considers just how frequently CPAs need to open Microsoft Office and other files submitted by clients and potential clients via email, it’s not hard to imagine how simple it might be for hackers to target and successfully compromise your average CPA.
The keylogging malware itself appears to have been sold (or perhaps directly deployed) by a cybercriminal who uses the nickname ja_far. This individual markets a $50 keylogger product alongside a malware “crypting” service that guarantees his malware will be undetected by most antivirus products for a given number of days after it is used against a victim.
It seems likely that ja_far’s keylogger was the source of this data because at one point — early in the morning John’s time — the attacker appears to have accidentally pasted ja_far’s jabber instant messenger address into the victim’s screen instead of his own. In all likelihood, John’s assailant was seeking additional crypting services to ensure the keylogger remained undetected on John’s PC. A couple of minutes later, the intruder downloaded a file to John’s PC from file-sharing site sendspace.com.
What I found remarkable about John’s situation was that, despite receiving notice after notice that the IRS had rejected many of his clients’ tax returns because those returns had already been filed by fraudsters, John does not appear to have suspected for at least two weeks that his compromised computer was the likely source of the fraud inflicted on his clients (or if he did, he didn’t share this suspicion with any of his friends or family via email).
Instead, John composed and distributed to his clients a form letter about their rejected returns, and another letter that clients could use to alert the IRS and New Jersey tax authorities of suspected identity fraud.
Then again, perhaps John ultimately did suspect that someone had commandeered his machine, because on March 30 he downloaded and installed Spyhunter 4, a security product by Enigma Software designed to detect spyware, keyloggers and rootkits, among other malicious software.
Spyhunter appears to have found ja_far’s keylogger, because shortly after the malware alert pictured above popped up on John’s screen, the Web-based keylogging service stopped recording logs from his machine. John did not respond to requests for comment (via phone).
It’s unlikely John’s various clients who experience(d) identity fraud, tax refund fraud or account takeovers as a result of his PC infection will ever learn the real reason for the fraud. I opted to keep his name out of this story because I thought the experience documented and explained here would be eye opening enough and I have no particular interest in ruining his business.
But a new type of identity theft that the IRS first warned about this year involving CPAs would be very difficult for a victim CPA to conceal. Identity thieves who specialize in tax refund fraud have been busy of late hacking online accounts at multiple tax preparation firms and using them to file phony refund requests. Once the IRS processes the return and deposits money into bank accounts of the hacked firms’ clients, the crooks contact those clients posing as a collection agency and demand that the money be “returned.”
If you go to file your taxes electronically this year and the return is rejected, it may mean fraudsters have beat you to it. The IRS advises taxpayers in this situation to follow the steps outlined in the Taxpayer Guide to Identity Theft. Those unable to file electronically should mail a paper tax return along with Form 14039 (PDF) — the Identity Theft Affidavit — stating they were victims of a tax preparer data breach.
Tax professionals might consider using something other than Microsoft Windows to manage their clients’ data. I’ve long dispensed this advice for people in charge of handling payroll accounts for small- to mid-sized businesses. I continue to stand by this advice not because there isn’t malware that can infect Mac or Linux-based systems, but because the vast majority of malicious software out there today still targets Windows computers, and you don’t have to outrun the bear — only the next guy.
Many readers involved in handling corporate payroll accounts have countered that this advice is impractical for people who rely on multiple Windows-based programs to do their jobs. These days, however, most systems and services needed to perform accounting (and CPA) tasks can be used across multiple operating systems — mainly because they are now Web-based and rely instead on credentials entered at some cloud service (e.g., UltraTax, QuickBooks, or even Microsoft’s Office 365).
Naturally, users still must be on guard against phishing scams that try to trick people into divulging credentials to these services, but when your entire business of managing other people’s money and identities can be undone by a simple keylogger, it’s a good idea to do whatever you can to keep from becoming the next malware victim.
According to the IRS, fraudsters are using spear phishing attacks to compromise computers of tax pros. In this scheme, the “criminal singles out one or more tax preparers in a firm and sends an email posing as a trusted source such as the IRS, a tax software provider or a cloud storage provider. Thieves also may pose as clients or new prospects. The objective is to trick the tax professional into disclosing sensitive usernames and passwords or to open a link or attachment that secretly downloads malware enabling the thieves to track every keystroke.”
The IRS warns that some tax professionals may be unaware they are victims of data theft, even long after all of their clients’ data has been stolen by digital intruders. Here are some signs there might be a problem:
- Client e-filed returns begin to be rejected because returns with their Social Security numbers were already filed;
- The number of returns filed with the tax practitioner’s Electronic Filing Identification Number (EFIN) exceeds the number of clients;
- Clients who haven’t filed tax returns begin to receive authentication letters from the IRS;
- Network computers running slower than normal;
- Computer cursors moving or changing numbers without touching the keyboard;
- Network computers locking out tax practitioners.
Adobe and Microsoft each released critical fixes for their products today, a.k.a. “Patch Tuesday,” the second Tuesday of every month. Adobe updated its Flash Player program to resolve a half dozen critical security holes. Microsoft issued updates to correct at least 65 security vulnerabilities in Windows and associated software.
The Microsoft updates impact many core Windows components, including the built-in browsers Internet Explorer and Edge, as well as Office, the Microsoft Malware Protection Engine, Microsoft Visual Studio and Microsoft Azure.
The Malware Protection Engine flaw is one that was publicly disclosed earlier this month, and one for which Redmond issued an out-of-band (outside of Patch Tuesday) update one week ago.
That flaw, discovered and reported by Google’s Project Zero program, is reportedly quite easy to exploit and impacts the malware scanning capabilities for a variety of Microsoft anti-malware products, including Windows Defender, Microsoft Endpoint Protection and Microsoft Security Essentials.
Microsoft really wants users to install these updates as quickly as possible, but it might not be the worst idea to wait a few days before doing so: Quite often, problems with patches — including some that can leave systems stuck in an endless reboot loop — are reported and resolved with subsequent updates within a few days of their release. However, depending on which version of Windows you’re using, it may be difficult to put off installing these patches.
Microsoft says by default, Windows 10 receives updates automatically, “and for customers running previous versions, we recommend they turn on automatic updates as a best practice.” Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update. In any case, don’t put off installing these updates too long.
Adobe’s Flash Player update fixes at least two critical bugs in the program. Adobe said it is not aware of any active exploits in the wild against either flaw, but if you’re not using Flash routinely for many sites, you probably want to disable or remove this buggy program.
Adobe is phasing out Flash entirely by 2020, but most of the major browsers already take steps to hobble Flash. And with good reason: It’s a major security liability. Google Chrome also bundles Flash, but blocks it from running on all but a handful of popular sites, and then only after user approval.
For Windows users with Mozilla Firefox installed, the browser prompts users to enable Flash on a per-site basis. Through the end of 2017 and into 2018, Microsoft Edge will continue to ask users for permission to run Flash on most sites the first time the site is visited, and will remember the user’s preference on subsequent visits.
The latest standalone version of Flash that addresses these bugs is available for Windows, Mac, Linux and Chrome OS. But most users probably would be better off manually hobbling or removing Flash altogether, since so few sites still actually require it. Disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.
As always, if you experience problems installing any of these updates, feel free to note your issues in the comments below. Chances are, another reader here has experienced something similar and can assist in troubleshooting the issue.
Social media sites are littered with seemingly innocuous little quizzes, games and surveys urging people to reminisce about specific topics, such as “What was your first job,” or “What was your first car?” The problem with participating in these informal surveys is that in doing so you may be inadvertently giving away the answers to “secret questions” that can be used to unlock access to a host of your online identities and accounts.
I’m willing to bet that a good percentage of regular readers here would never respond — honestly or otherwise — to such questionnaires (except perhaps to chide others for responding). But I thought it was worth mentioning because certain social networks — particularly Facebook — seem positively overrun with these data-harvesting schemes. What’s more, I’m constantly asking friends and family members to stop participating in these quizzes and to stop urging their contacts to do the same.
On the surface, these simple questions may be little more than an attempt at online engagement by otherwise well-meaning companies and individuals. Nevertheless, your answers to these questions may live in perpetuity online, giving identity thieves and scammers ample ammunition to start gaining backdoor access to your various online accounts.
Consider, for example, the following quiz posted to Facebook by San Benito Tire Pros, a tire and auto repair shop in California. It asks Facebook users, “What car did you learn to drive stick shift on?”
I hope this is painfully obvious, but for many people the answer will be the same as to the question, “What was the make and model of your first car?”, which is one of several “secret questions” most commonly used by banks and other companies to let customers reset their passwords or gain access to the account without knowing the password.
Probably the most well-known and common secret question, “what was the name of your first pet,” comes up in a number of Facebook quizzes that, incredibly, thousands of people answer willingly and (apparently) truthfully. When I saw this one I was reminded of this hilarious 2007 Daily Show interview wherein Jon Stewart has Microsoft co-founder Bill Gates on and tries to slyly ask him the name of his first pet.
Womenworking.com asked a variation on this same question of their huge Facebook following and received an impressive number of responses:
Here’s a great one from springchicken.co.uk, an e-commerce site in the United Kingdom. It asks users to publicly state the answer to yet another common secret question: “What street did you grow up on?”
This question, from the Facebook account of Rving.how — a site for owners of recreational vehicles — asks: “What was your first job?” How the answer to this question might possibly relate to RV camping is beyond me, but that didn’t stop people from responding.
The question, “What was your high school mascot” is another common secret question, and yet you can find this one floating around lots of Facebook profiles:
Among the most common secret questions is, “Where did you meet your spouse or partner?” Loads of people like to share this information online as well, it seems:
Here’s another gem from the Womenworking Facebook page. Who hasn’t had to use the next secret question at some point? Answering this truthfully — in a Facebook quiz or on your profile somewhere — is a bad idea.
Do you remember your first grade teacher’s name? Don’t worry, if you forget it after answering this question, Facebook will remember it for you:
I’ve never seen a “what was the first concert you ever saw” secret question, but it is unique as secret questions go and I wouldn’t be surprised if some companies use this one. “What is your favorite band?” is definitely a common secret question, however:
Giving away information about yourself, your likes and preferences, etc., can lead to all kinds of unexpected consequences. This practice may even help turn the tide of elections. Just take the ongoing scandal involving Cambridge Analytica, which reportedly collected data on more than 50 million Facebook users without their consent and then used this information to build behavioral models to target potential voters in various political campaigns.
I hope readers don’t interpret this story as KrebsOnSecurity endorsing secret questions as a valid form of authentication. In fact, I have railed against this practice for years, precisely because the answers often are so easily found using online services and social media profiles.
But if you must patronize a company or service that forces you to select secret questions, I think it’s a really good idea not to answer them truthfully. Just make sure you have a method for remembering your phony answer, in case you forget the lie somewhere down the road.
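One possible method — alongside the simpler option of just storing the lies in a password manager — is to derive each phony answer deterministically from a single master secret, so the same fake answer can always be regenerated later. A sketch, with hypothetical site and question names:

```python
import base64
import hashlib
import hmac

def phony_answer(master_secret: str, site: str, question: str) -> str:
    """Derive a reproducible fake secret-question answer from one
    master secret, so there is no lie to memorize per site."""
    msg = f"{site}|{question}".encode()
    digest = hmac.new(master_secret.encode(), msg, hashlib.sha256).digest()
    # Base32 keeps the result to letters and digits, easy to read
    # aloud to a phone rep; 12 characters is plenty of entropy.
    return base64.b32encode(digest)[:12].decode().lower()

# Same inputs always regenerate the same fake answer.
ans = phony_answer("correct horse battery staple",
                   "examplebank.com", "first pet")
```

The answer is unguessable from your social media history, yet recoverable any time you hold the master secret.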
Many thanks to RonM for assistance with this post.
The U.S. Secret Service is warning financial institutions about a new scam involving the temporary theft of chip-based debit cards issued to large corporations. In this scheme, the fraudsters intercept new debit cards in the mail and replace the chips on the cards with chips from old cards. When the unsuspecting business receives and activates the modified card, thieves can start draining funds from the account.
According to an alert sent to banks late last month, the entire scheme goes as follows:
1. Criminals intercept mail sent from a financial institution to large corporations that contains payment cards, targeting debit payment cards with access to large amounts of funds.
2. The crooks remove the chip from the debit payment card using a heat source that warms the glue.
3. Criminals replace the chip with an old or invalid chip and repackage the payment card for delivery.
4. Criminals place the stolen chip into an old payment card.
5. The corporation receives the debit payment card without realizing the chip has been replaced.
6. The corporate office activates the debit payment card; however, their payment card is inoperable thanks to the old chip.
7. Criminals use the payment card with the stolen chip for their personal gain once the corporate office activates the card.
The reason the crooks don’t just use the debit cards when intercepting them via the mail is that they need the cards to be activated first, and presumably they lack the privileged information needed to do that. So, they change out the chip and send the card on to the legitimate account holder and then wait for it to be activated.
The Secret Service memo doesn’t specify at what point in the mail process the crooks are intercepting the cards. It could well involve U.S. Postal Service employees (or another delivery service), or perhaps the thieves are somehow gaining access to company mailboxes directly. Either way, this alert shows the extent to which some thieves will go to target high-value customers.
One final note: It seems almost every time I write about the Secret Service in relation to credit card fraud, some readers are mystified why an agency entrusted with protecting the President of the United States is involved at all in these types of investigations. The truth is that safeguarding the nation’s currency supply from counterfeiters was the Secret Service’s original mission when it was first created in 1865. Only after the assassination of President William McKinley — the third sitting president to be assassinated — did that mandate come to include protecting the president and foreign dignitaries.
Incidentally, if you enjoy reading historical non-fiction, I’d highly recommend Candice Millard‘s magnificently researched and written book, Destiny of the Republic, about the life and slow, painful death of President James A. Garfield after he was shot in the back by his lunatic assailant.