Apple’s $200m acquisition wants to shed light on dark data
Apple has reportedly made another sly move into the world of artificial intelligence (AI) through the $200 million acquisition of dark data firm Lattice, reports Telecoms.com (Banking Technology’s sister publication).
The purchase, brought to light by TechCrunch, will add a small but respected team of AI engineers to the ranks of the iLife army. Covert acquisitions like this are not uncommon for Apple; it is, after all, one of the more secretive tech firms out there. It is not yet clear what the technology will be used for, but it is certainly an interesting set of capabilities to bring into the fray.
Lattice itself specialises in dealing with what is known as dark data, or, for the rest of us, the tidal wave of unstructured data in circulation. It essentially rationalises unstructured data, such as that from social media platforms or images, so it can be more easily aggregated and put to use in various functions.
To date, the team has raised $195 million from various investors, most recently in October 2016, when it took an additional $80 million from Asia Pacific Resources Development Investment and GSR Ventures. It was founded by Christopher Ré, Michael Cafarella, Raphael Hoffmann and Feng Niu, who spun out a technology from Stanford University called DeepDive, designed to extract value from dark data.
If data is to define the digital economy, companies like Lattice will become increasingly important over the next couple of years. Big data is a term which has been around for some time, though the promise has yet to be realised due to the complications of dealing with such vast amounts of information.
Big data is a commonly used term, though there is some misunderstanding about what it actually is. Many people assume it refers simply to the amount of information; that is part of it, but the picture is more complicated. Big data is defined by the three Vs:
- Volume: the total amount of data which becomes available
- Velocity: the rate at which data becomes available
- Variety: the form in which the data becomes available
The first two can now be dealt with thanks to the decreasing cost of computing power and data storage, driven by the cloud computing propositions of companies like Amazon Web Services (AWS) and Microsoft, but the last one is more complicated. Algorithms need to be created to rationalise the data into a common language so it can be processed in an automated fashion. Given the scale implied by the first two Vs, the third cannot be handled by humans alone. This is where AI becomes crucial in making big data a reality.
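As a toy illustration of what "rationalising into a common language" can mean in practice, the sketch below maps free-form social media posts onto a uniform, queryable schema. It is a hypothetical, vastly simplified stand-in for the statistical extraction a system like DeepDive performs, not Lattice's actual technology:

```python
import re

def rationalise_post(raw_text):
    """Map a free-form social media post onto a common schema.

    Hypothetical example: real dark-data systems use trained
    statistical models rather than hand-written patterns.
    """
    return {
        "text": raw_text,
        "hashtags": re.findall(r"#(\w+)", raw_text),
        "mentions": re.findall(r"@(\w+)", raw_text),
        "word_count": len(raw_text.split()),
    }

# Heterogeneous, unstructured inputs...
posts = [
    "Loving the new #AI features from @Apple!",
    "80% of the world's data is dark #bigdata #unstructured",
]

# ...become uniform records that downstream automation can aggregate.
structured = [rationalise_post(p) for p in posts]
```

Once every post shares the same fields, counting hashtag frequencies or filtering by mention becomes a routine automated query rather than a manual reading exercise.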
IBM estimates that 80% of the world's data is currently unstructured, and the problem is only going to snowball as both the amount of data created (2.5 billion GB each day) and the variety of unstructured data increase. Automation is a very useful tool, but the data must be rationalised into a common language for those processes to work to the best of their ability; the success of these technologies will be limited if insight can only be drawn from 20% of the available data.
Moving forward, it is clear the amount and variety of unstructured data is only going to increase. Just look at your Facebook profile: have you noticed your connections using more GIFs and emojis, creating more videos and uploading more pictures? This data can tell you a lot about a user, but can your automated systems understand it yet?
Apple doesn’t tend to make a big deal about new acquisitions, especially those in the AI arena, but this could prove to be a very useful tool for Siri or other intelligence-driven applications.