At the ShellyPalmer Innovation Series Breakfast at CES 2018, I had a Socratic discussion with David Sapin, US Risk & Regulatory Leader, PwC, about the influence of the big technology platforms and other emerging technologies on our lives, and the need for responsible innovation. We also talked about the growing "techlash" buzz for more industry regulation. While we agreed that some aspects of the industry needed a formal approach, we felt that the best path at the time might be industry self-regulation built around responsible innovation (see A Case for Responsible Innovation).
Our perspective was that the industry was in the best position to develop and enforce responsible innovation standards, ones that would address growing concerns from the public and policy makers about the influence of big tech and emerging tech without unnecessarily restraining the innovation that has been the trademark of the industry. We received quite a bit of positive feedback; people were interested in what a self-regulatory approach would look like and whether it might actually work. Then the Facebook/Cambridge Analytica news broke.
David and I, along with our friend and colleague Rob Mesirow, Principal, PwC Connected Solutions, have spent some quality time discussing Big Tech regulation — and our thinking has evolved.
2018: The pendulum starts to swing for big tech regulation
The breaking of the Facebook/Cambridge Analytica story in March 2018 may be the moment when we all realized that some level of big tech platform regulation is needed. The story (and its aftermath of hearings and investigations) set in motion a year that brought even more attention to the influence of technology and the big tech platforms on our lives, and new calls to think about how, or whether, we "regulate" the industry. Over the rest of 2018, the regulatory approach moved into full swing on the privacy front. While not a reaction to the Facebook situation, the EU's General Data Protection Regulation (GDPR) went into effect in May, and California passed its own privacy legislation in June, the California Consumer Privacy Act (CCPA). We also saw large global tech companies and others implement new policies, procedures, and controls to address the growing concerns of consumers and policy makers alike on issues ranging from location tracking and facial recognition to hate speech. Finally, toward the end of the year we began to see a divergence within the industry, with some of the big tech players, such as Apple, calling for regulation while others remained less enthusiastic.
2019: From privacy laws to concerns about geolocation and artificial intelligence…what’s next?
So, where do we go from here on some of the key big tech regulatory issues? The privacy regulatory train has clearly left the station. GDPR has set the standard, and most global companies have already modified their data policies, procedures, and practices to ensure compliance, at least with regard to their EU customers' and employees' data. The passage of the CCPA will extend the scope of customers covered by comprehensive privacy legislation and may trigger similar legislative action by other states, which would further increase the calls for federal privacy legislation in the US to establish a consistent legal framework for privacy.
We are often reminded that our apps are free because "we are the product," and perhaps, as long as the only result is some targeted ads or a better understanding of retail traffic patterns, there is no need for concern or regulatory attention. Privacy legislation such as GDPR and CCPA should provide individuals with more transparency, and more choice, about how their data is used. Anonymized data, however, is not subject to the privacy protections provided under GDPR and CCPA. As more and more data is collected to power advances in artificial intelligence (AI) and machine learning (see PwC AI predictions), questions are being asked about whether any dataset can remain truly anonymous when it can be cross-referenced with so much other data. If it cannot, will consumers really have the protections they think they do under privacy laws? Will we start to see increasing pressure from policy makers to further limit the amount and types of consumer data being collected, and how that data is used? That would seem to be the next step in the evolution of data protection and ethical data use.
The one question we often get is whether we think we will see some regulation of the big tech platforms in 2019. While there is "buzz" in Washington calling for some level of regulation, it will likely be difficult to build a congressional consensus on what big tech regulation should look like. As with privacy, the states may once again step in (as California did with net neutrality) to address specific concerns. The biggest pressure on the tech platforms will likely come from their users, who will demand changes to their policies. For now, the "do it yourself" approach to addressing these issues will continue to be the approach of choice. There has also been some talk of using an antitrust argument to "break up" some of the biggest companies, but for the most part the big tech companies continue to provide their customers with more convenience at a lower cost, and policy makers will be hesitant to interfere with that business-to-customer relationship.
The new normal for the big tech regulatory environment
While they may not face the spectre of an onerous regulatory environment in the near future, the big tech companies do understand that the policy landscape has changed. They have to dig into their business processes, find the root causes of their problems, fix them, and demonstrate that change in order to (re)build public trust. In the current environment, these companies have started moving away from the old position of self-regulation toward actively seeking to work with lawmakers in Europe and the US.
Similarly, companies deploying emerging technologies such as geolocation, facial recognition, and AI need to understand that their business model is changing. Traditional brick-and-mortar companies are now becoming data companies and need to build a different type of analysis into their product development process and their strategic decision making. They need to think ahead about how these new technologies might harm their customers and customer relationships, and address those concerns in the product design process.
We may not quite be at the dawn of a new regulatory era for big technology, but times have definitely changed. It is more important than ever for companies to both lead from the front on responsible innovation and build consensus with policy makers on the appropriate path forward. If companies do not find the right balance in that approach, there may be no choice but for policy makers to step in and take the lead. That may not be the best answer for big tech – or for us.
Author’s note: David Sapin, Principal US Advisory Risk, PwC contributed to this article.