Articles
Our team loves to share what we learn, know and do with the wider community, in order to promote the exchange of deep scientific and hands-on technical knowledge.
In this new article we return to the subject of Data Clean Rooms (DCR), which are becoming an increasingly topical challenge due to the data regulations arriving in Europe and, eventually, the rest of the world. In case you missed it, I encourage you to give our original article a quick read here.
It’s no exaggeration to say that data has become a crucial part of marketing & media planning, fully integrated into current workflows. However, new privacy regulations around personal data and its use are coming (or are already here!), and many brands and marketers are wondering how they can still understand their audiences and stay relevant while also remaining privacy compliant.
In our previous article in this series we presented two techniques to improve the optimisation of a neural network, both based on adaptation: adapting the learning rate α and extending the gradient descent formula with new terms.
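As a quick illustration of those two ideas, here is a minimal Python sketch (not the code from that article; the function names, the decay schedule, and the hyperparameters are illustrative assumptions): the learning rate α shrinks over time, and a momentum term is added to the plain gradient descent update on a toy quadratic loss.

```python
import numpy as np

def gradient(w):
    # Gradient of a toy quadratic loss L(w) = ||w||^2 / 2 (stands in for the real loss)
    return w

def train(w0, alpha0=0.1, decay=0.01, beta=0.9, steps=200):
    w = np.asarray(w0, dtype=float)
    velocity = np.zeros_like(w)
    for t in range(steps):
        alpha = alpha0 / (1.0 + decay * t)                # adaptive learning rate: decays each step
        velocity = beta * velocity - alpha * gradient(w)  # new momentum term added to the update
        w = w + velocity                                  # replaces the plain w -= alpha * gradient(w)
    return w

print(train([1.0, -2.0]))  # should end up close to the minimum at the origin
```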
AUG 1, 2022: Cash & Carry adoption of data science is gaining momentum. The first book on wholesaling, Wholesaling Principles and Practice (1937) by Beckman and Engle, states that “During the era of rapid change in the field of wholesaling which began in the middle of the twenties, the cash and carry wholesale […]
Unless your head has been buried in the sand for the last few years, you already know that the deprecation of third-party cookies and IDs is coming at us quickly. A tighter privacy framework and growing Walled Gardens have us all scratching our heads for technical solutions that allow us to develop data-driven marketing strategies and top-notch Customer Journeys.
Have you ever seen or participated in a project where a great tool was developed and then no one ever used it? It’s a story that repeats time and time again: a project is delivered, but adoption falls short of what was hoped for. How is that possible if the solution works perfectly and fully meets the specifications?
More than 2.5 quintillion bytes of data were generated daily in 2020 (source: GS Analytics). To put this in perspective, a quintillion is a million million million or, more simply, a 1 followed by 18 zeros. It is therefore not surprising that a significant amount of this data is subject to errors.
You can argue about how much the pandemic had to do with the increasing pace at which Artificial Intelligence (AI) was adopted throughout 2021, but what you cannot argue with is that Covid has pushed leaders to accelerate research and work in this field.
In the first part of this article, we took a look at the Transformer model and its main use cases in NLP, which has hopefully broadened your understanding of the topic at hand. If you have not read it yet, I suggest you give it a quick read first, since it will help you understand the Transformer’s current standing.
If you have dug deep into machine learning algorithms, you will probably have heard of terms such as neural networks or natural language processing (NLP). Regarding the latter, a powerful model architecture has appeared in the last few years that has disrupted the text mining industry: The Transformer.
Dramatic increases in computing power have led to a surge of Artificial Intelligence applications with immense potential in industries as diverse as health, logistics, energy, travel and sports.
In the first part of this article we discussed the potential harm and risks of some Artificial Intelligence applications that have shown immense promise across many industries.
An Artificial Neural Network (ANN) is a statistical learning algorithm that is framed in the context of supervised learning and Artificial Intelligence.
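As a minimal, purely illustrative sketch of that definition (not the article's implementation; the architecture, dataset, and hyperparameters are assumptions chosen for brevity), the snippet below trains a tiny feed-forward ANN in a supervised fashion on the classic XOR problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Supervised learning setup: inputs X and their known labels y (the XOR function)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny network: 2 inputs -> 8 hidden units -> 1 output
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

lr = 0.5
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass for a squared error loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent update of all weights and biases
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # predictions should approach the XOR labels [0, 1, 1, 0]
```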
In this second part of the previous introductory article, we’ll tackle a more in-depth description of data collection, object modelling, and data analysis at CMS.
Are you interested in learning more about particle physics? You might have heard terms like neutrinos, quarks, or dark matter mentioned before and want to know more about them.
In response to what was probably the most atypical year many of us have ever lived through, leading companies have wisely reconsidered where to place their money.
While overall adoption of artificial intelligence (AI) remains low among businesses, every senior executive that I discuss the topic with claims that AI isn’t hype, although they confirm that they feel uncertain about where these disciplines may provide the largest rewards.
This isn’t a highly technical article explaining the maths of Omitted Variable Bias (OVB); plenty of brave individuals have already taken that approach, and their work can be read here (1) or here (2). Instead, this is an article discussing what OVB is in plain English and its implications for the world of marketing and data.
In this article I attempt to provide an easy-to-digest explanation of what Deep Learning is and how it works, starting with an overview of the enabling technology: Artificial Neural Networks.
Interest in esports is growing, and this widespread phenomenon attracts millions of individuals around the globe every day. What started out as a niche activity has now turned into shows that bring thousands of people together in stadiums and pavilions.