I have the following problem and was thinking I could use machine learning but I'm not completely certain it will work for my use case.

I have a data set of around a hundred million records containing customer data, including names, addresses, emails, phones, etc., and would like to find a way to clean this customer data and identify possible duplicates in the data set.

Most of the data has been manually entered using an external system with no validation so a lot of our customers have ended up with more than one profile in our DB, sometimes with different data in each record.

For instance, we might have five different entries for a customer John Doe, each with different contact details.

We also have cases where multiple records that represent different customers match on key fields like email. For instance, when a customer doesn't have an email address but the data entry system requires one, our consultants will enter a random email address, resulting in many different customer profiles using the same email address; the same applies to phones, addresses, etc.

All of our data is indexed in Elasticsearch and stored in a SQL Server database. My first thought was to use Mahout as a machine learning platform (since this is a Java shop) and maybe HBase to store our data (just because it fits the Hadoop ecosystem; I'm not sure it would add any real value), but the more I read, the more confused I am about how it would work in my case. For starters, I'm not sure what kind of algorithm I could use, since I don't know which category this problem falls into: could I use a clustering algorithm or a classification algorithm? And of course, certain rules will have to define what constitutes a profile's uniqueness, i.e. which fields.

The idea is to have this deployed initially as a Customer Profile de-duplicator service of sorts that our data entry systems can use to validate and detect possible duplicates when entering a new customer profile and in the future perhaps develop this into an analytics platform to gather insight about our customers.

Any feedback will be greatly appreciated :)

Thanks.

1 Answer


There has actually been a lot of research on this, and people have used many different kinds of machine learning algorithms for it. I've personally tried genetic programming, which worked reasonably well, but I still prefer to tune the matching manually.
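
To make "tuning the matching manually" concrete, here is a minimal sketch of the usual pairwise approach: score each candidate pair with a weighted average of per-field string similarities and flag pairs above a threshold. The field names, weights, and threshold below are illustrative assumptions, not values from any particular system:

```java
import java.util.Map;

// Minimal sketch of hand-tuned pairwise matching (not any engine's actual API).
// Field names, weights, and the threshold are illustrative assumptions.
public class PairMatcher {

    // Relative importance of each field when scoring a candidate pair.
    static final Map<String, Double> WEIGHTS = Map.of(
            "name", 0.3, "email", 0.3, "phone", 0.2, "address", 0.2);

    static final double MATCH_THRESHOLD = 0.85; // tuned by hand on labeled pairs

    // Normalized Levenshtein similarity in [0, 1].
    static double similarity(String a, String b) {
        if (a == null || b == null || a.isEmpty() || b.isEmpty()) return 0.0;
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++)
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                        d[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1));
        return 1.0 - (double) d[a.length()][b.length()] / Math.max(a.length(), b.length());
    }

    // Weighted average of per-field similarities; a pair scoring above the
    // threshold is flagged as a probable duplicate.
    static boolean isProbableDuplicate(Map<String, String> r1, Map<String, String> r2) {
        double score = 0.0;
        for (Map.Entry<String, Double> e : WEIGHTS.entrySet())
            score += e.getValue() * similarity(r1.get(e.getKey()), r2.get(e.getKey()));
        return score > MATCH_THRESHOLD;
    }

    public static void main(String[] args) {
        Map<String, String> a = Map.of("name", "John Doe", "email", "jdoe@example.com",
                "phone", "555-1234", "address", "1 Main St");
        Map<String, String> b = Map.of("name", "Jon Doe", "email", "jdoe@example.com",
                "phone", "555-1234", "address", "1 Main Street");
        System.out.println(isProbableDuplicate(a, b)); // true
    }
}
```

Genetic programming (and other learning approaches) essentially automates the search for those weights and thresholds from labeled example pairs.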

I have a few references for research papers on this subject. StackOverflow doesn't want too many links, but here is bibliographic info that should be enough to find them via Google:

Unsupervised Learning of Link Discovery Configuration, Andriy Nikolov, Mathieu d'Aquin, and Enrico Motta

A Machine Learning Approach for Instance Matching Based on Similarity Metrics, Shu Rong, Xing Niu, Evan Wei Xiang, Haofen Wang, Qiang Yang, and Yong Yu

Learning Blocking Schemes for Record Linkage, Matthew Michelson and Craig A. Knoblock

Learning Linkage Rules Using Genetic Programming, Robert Isele and Christian Bizer

That's all research, though. If you are looking for a practical answer to your problem, I've developed an open-source engine for this kind of deduplication, called Duke. It indexes the data with Lucene and then searches for likely matches before doing a more detailed comparison. It requires manual setup, although there is a script that can use genetic programming (see the reference above) to create a configuration for you. There's also someone who wants to build an Elasticsearch plugin for Duke (see the thread), but nothing's done so far.
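
To give a feel for that first phase: comparing all pairs is infeasible at a hundred million records, so you group records by cheap blocking keys and run the detailed comparison only within each block. The sketch below is in-memory for illustration; in your case the candidate lookup would be your Elasticsearch index. The MAX_BLOCK_SIZE cutoff is an assumed way to skip "junk" keys like the shared placeholder emails you describe:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the blocking phase: group records by cheap keys (normalized email
// and phone), drop keys shared by too many records (the placeholder-email
// problem), and emit only within-block pairs for detailed comparison.
public class Blocker {

    // Keys shared by more records than this are treated as junk placeholders.
    static final int MAX_BLOCK_SIZE = 20; // assumed cutoff, tune on real data

    record Customer(long id, String email, String phone) {}

    static String normalize(String s) {
        return s == null ? "" : s.toLowerCase().replaceAll("[^a-z0-9@]", "");
    }

    static List<long[]> candidatePairs(List<Customer> customers) {
        Map<String, List<Customer>> blocks = new HashMap<>();
        for (Customer c : customers) {
            for (String key : List.of("e:" + normalize(c.email()), "p:" + normalize(c.phone()))) {
                if (key.length() > 2) // skip empty values
                    blocks.computeIfAbsent(key, k -> new ArrayList<>()).add(c);
            }
        }
        List<long[]> pairs = new ArrayList<>();
        for (List<Customer> block : blocks.values()) {
            if (block.size() < 2 || block.size() > MAX_BLOCK_SIZE) continue; // junk key
            for (int i = 0; i < block.size(); i++)
                for (int j = i + 1; j < block.size(); j++)
                    pairs.add(new long[]{block.get(i).id(), block.get(j).id()});
        }
        return pairs; // in production, deduplicate pairs found via multiple keys
    }

    public static void main(String[] args) {
        List<Customer> data = List.of(
                new Customer(1, "jdoe@example.com", "555-1234"),
                new Customer(2, "JDoe@Example.com", "555-9999"),
                new Customer(3, "none@example.com", null));
        System.out.println(candidatePairs(data).size()); // 1: records 1 and 2 share an email block
    }
}
```

Pairs that survive blocking then go through the detailed field-by-field comparison, like the matcher sketched above.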

Anyway, that's the approach I'd take in your case.

