Research Merchants

Risk Engines Verify Credit Card Customers

What is a risk engine? In technical terms (and for the purposes of credit card processing), it is an API that draws on a combination of proprietary and public data, aggregated into a single database, to produce a score, similar to a "credit score," that can be applied to individual merchants as well as to individual credit card transactions. While there is great potential for abuse of this data, especially in the wake of the vagaries surrounding Operation Choke Point, the potential to screen out fraudulent transactions and merchant services applications is significant. Even small ISOs and processors see many attempts to create phony accounts that rack up tens of thousands of dollars in fraud before being shut down, leaving ISOs and acquirers on the hook for stolen card information.
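A minimal sketch of the scoring idea described above, where aggregated data points are combined into a single merchant score. All field names, weights, and thresholds here are illustrative assumptions, not the workings of any real risk engine:

```python
# Hypothetical sketch: folding aggregated data points into a single
# merchant risk score, in the spirit of a "credit score" for processing risk.
# Every field name, weight, and threshold below is an illustrative assumption.

def merchant_risk_score(profile: dict) -> int:
    """Return a 0-100 score; higher means riskier."""
    score = 0.0
    # Thin histories look riskier (the "born yesterday" problem).
    if profile.get("account_age_days", 0) < 90:
        score += 30
    # Prior chargebacks weigh heavily, capped so one signal can't dominate.
    score += min(profile.get("chargeback_rate_pct", 0.0) * 20, 40)
    # Mismatched or unverified contact data is a classic fraud signal.
    if not profile.get("address_verified", False):
        score += 20
    # A sparse web/social footprint adds residual risk.
    if profile.get("web_presence_signals", 0) < 3:
        score += 10
    return min(int(score), 100)

# A new, unverified account with a 1.5% chargeback rate scores high:
print(merchant_risk_score({"account_age_days": 30,
                           "chargeback_rate_pct": 1.5,
                           "address_verified": False}))  # 90
```

A real engine would weigh hundreds of such signals and tune the weights statistically, but the shape of the computation (aggregate, weight, cap, threshold) is the same.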

One product in the marketplace today, known as Veda, has been developed by the payments startup WePay. It uses multiple (perhaps several hundred) data points to determine whether credit card transactions are on the up-and-up. The tool makes its money by charging a fee on every transaction, and since many companies can already calculate how much fraud is costing them divided by their number of transactions, the cost-benefit and ROI case takes only a couple of minutes to build. In the wake of data breaches and the presentation of counterfeit cards, any tool that can verify transactions instantaneously not only stops a bad purchase but also acts as a deterrent. Some of the data points include contact information, payment histories, and products ordered. Several layers of metadata are added to this equation: the age of a person's Facebook account, number of friends, company reviews, credit data, and even social media behavior all go into determining trust. If you were "born yesterday," then the lack of data itself becomes a problem.

Big Data Tools Quantify Risk

To understand how these engines work, consider the amount of information they take in when making an assessment. For a new credit card processing customer, a tool may query all the social media engines, old business records, public databases, YouTube, tweets, and even old newspapers that have recently been digitized. Ideally, the information would be used to confirm that you are who you say you are, in order to prevent fraudulent applications and minimize the risk of account abuse. But what if the engine looks into your political affiliation, group memberships, and hobbies? While an insurance company may have a right to know whether you are a mountain climber or trapshooter, does a credit card processor (or acquirer) have the right to deny you service because you belong to a gun club? Data points could be manipulated in ways that violate regulations, or scored in a manner that effectively discriminates against people even though there is no paper trail.

Privacy Advocates and Entrepreneurs Fear Risk Engine Data

The nature of the data collected and assessed by risk engines has many businesses concerned. In many instances, people are being labeled "high risk" because of their professions or because of associations made on Facebook, Twitter, and any number of other platforms and databases that may not be available for public consumption. In what essentially amounts to a background check on your company and its principals, engines may use complex algorithms not only to check out who you are, but also with whom you are associated. Considering that the FDIC and Consumer Financial Protection Bureau have already defined high-risk categories, the next step is determining who may be (or may ever have been) associated with an enterprise that is looked upon unfavorably. The danger lies in the fact that legitimate services and products are often sold in an environment considered risky, or one with a high number of disreputable companies that may be operating illegally. (Just ask the masseuse who spent years going to school, only to find that some customers have very different expectations, and that banks and credit card processors associate her trade with illicit activity.)

Because of risk categories defined under Operation Choke Point, many lines of business are already losing the ability to process credit cards. Private individuals are also losing bank accounts because of the industries in which they operate. There are even people who are unable to take donations on crowdfunding websites despite having a legitimate need, usually based on data that has come out of a risk assessment program and/or social media activity. Automated risk algorithms can also register many false positives, and when you consider that even the billion-dollar search engines are continuously refining their processes, it could be very easy (and quite Kafkaesque) to trip up ordinary people based on data points that may never become public. False associations, along with the fallout from identity theft and fraud, can make it hard for people to get services because of bad data. This even happens out in the real world, where people whose identities have been stolen by criminals can get arrested because police officers end up flagging the "alias" used by a crook.