EVENT HIGHLIGHTS

Which way forward for algorithmic management in the Platform Work Directive?

Speakers: Tobias Müllensiefen, Simon Hania, Robert Torvelainen, Zachary Kilhoffer
Moderator: Théo Bourgery-Gonse

On November 8th 2022, PubAffairs Bruxelles organised an afternoon session to discuss the way forward for algorithmic management in the Platform Work Directive and the obligations for digital labour platforms with Mr Tobias Muellensiefen, Legal Officer, Future of Work, DG EMPL, European Commission; Mr Simon Hania, Data Protection Officer, Uber; Mr Robert Torvelainen, Senior Public Policy Manager, Wolt; Mr Zachary Kilhoffer, Doctoral Researcher, School of Information Sciences, University of Illinois Urbana-Champaign.

The event was moderated by Théo Bourgery-Gonse, Journalist at Euractiv.

Théo Bourgery-Gonse introduced the topic of algorithmic management as addressed in the Platform Work Directive and provided a brief description of this legislative initiative. He explained that the directive was put forward in order to clarify the contractual status of platform workers across the EU and to strike a balance between worker protection and digital innovation. Indeed, Bourgery-Gonse explained, algorithmic management processes data to set work tasks and assignments, evaluate worker performance and carry out other relevant functions (such as distributing bonuses and sanctions). Yet criticism of the lack of transparency and understandability of such processes has become a major issue in the public discussion surrounding digital platform activities. For this reason, the European institutions have long debated the Platform Work Directive in order to “ensure fairness, transparency and accountability of algorithmic management in the platform work context”, as per the Commission’s proposal.

The moderator subsequently asked Tobias Müllensiefen what he considered to be the key aspects of the legislative proposal and how the Platform Work Directive would provide transparency, accountability and fairness when regulating algorithmic management.

Tobias Müllensiefen began by defining the concept of platform work as paid work organised through a platform in which at least three actors are involved, namely digital platforms, workers and end users. In addition, the speaker specified that this type of work is usually paid by task. Müllensiefen also noted that there are several types of platform work spanning several sectors, both online and on-location, although the public discussion has mainly focused on food delivery and ride-hailing services. In this connection, he noted that the European Commission estimates that 28 million EU citizens currently perform platform work in the EU, of whom just 6 million work on-location while the rest work online. The speaker also stated that the total number of citizens working for digital platforms is expected to reach 45 million by 2025; platform work can therefore be described as a growing phenomenon. He added that a recent study commissioned by the EU institutions identified more than 500 digital platforms active in the EU, with revenues growing over time.

Before turning to the challenges of algorithmic management, Müllensiefen stated that its fundamental feature, when applied to work, is that giving instructions and taking decisions regarding workers is no longer up to a human being but to an algorithm. In fact, he continued, algorithmic management can be responsible for all kinds of decisions, such as workers’ earnings, task assignments, penalties and bonuses. The speaker added that such systems similarly exercise a form of supervision by tracking workers’ location, activities and outputs. According to Müllensiefen, algorithmic management does influence working conditions, and, for this reason, critics highlight the lack of transparency, the lack of human oversight and unfair practices. Moreover, workers themselves tend to struggle to understand the rationale behind the decision-making processes of these digital platforms.

The speaker also referred to the psychological effects of algorithmic management, especially practices such as gamification and nudging, which could have health consequences. He explained that these matters are currently barely regulated in EU labour law, and, for this reason, the European Commission has proposed a new legal framework. The speaker also echoed the moderator’s earlier remarks regarding fairness and accountability in the Platform Work Directive and stressed that another main policy objective of the EU institutions is, indeed, to provide clarity and understanding on platform work for companies and workers alike.

In order to achieve these goals, Müllensiefen said, the directive addresses several questions related to automated monitoring and decision-making systems. Firstly, the legislative proposal aims to remedy the lack of transparency by giving workers the right to know which of their activities are being monitored and evaluated, what kinds of decisions platforms may take and the main parameters used by the algorithms. Secondly, it addresses the question of data processing: the legislative proposal introduces restrictions under which digital platforms cannot process data on the emotional state of workers or data generated when workers are not using the application. Thirdly, the legislation introduces human monitoring of automated decisions and of the overall effects of algorithmic management, so that workers can be reassured that the impact of automated systems on their working conditions will ultimately be overseen by human beings.

The speaker then stressed a fourth issue, which involves the right to challenge decisions taken by digital platforms. This provision would allow workers to request a review of the decision-making process and a discussion with a representative of the digital platform in order to assess the accuracy and fairness of the decision in question. Müllensiefen concluded by stating that this last aspect of the legislation also entails that workers’ representatives should be informed and consulted if the algorithmic management system is substantially changed, so that workers are able to express their views on any new aspect of their work. In this connection, he said, the legislative proposal aims to improve the working conditions of people working through digital platforms, whilst complementing other platform-related provisions.
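
By way of illustration only, the sketch below (in Python, with entirely hypothetical names and values) shows the kind of structured disclosure a platform might assemble for workers under transparency obligations of this sort; it is an assumption-based reading aid, not drawn from the directive text or from any platform's practice.

```python
from dataclasses import dataclass
from typing import List

# Purely illustrative sketch: a hypothetical record a platform might assemble to
# meet transparency duties of the kind described above. Field names and example
# values are assumptions for illustration, not taken from the directive text.

@dataclass
class AlgorithmicTransparencyNotice:
    monitored_activities: List[str]      # what the system observes about the worker
    automated_decision_types: List[str]  # decisions the system may take or support
    main_parameters: List[str]           # main parameters feeding those decisions
    human_review_contact: str            # who a worker can ask to review a decision

notice = AlgorithmicTransparencyNotice(
    monitored_activities=["task acceptance", "delivery completion time"],
    automated_decision_types=["task assignment", "bonus allocation"],
    main_parameters=["courier availability", "proximity to pick-up point"],
    human_review_contact="review-team@platform.example",
)
print(notice.automated_decision_types)
```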

The moderator subsequently asked Robert Torvelainen how, in his opinion, algorithmic management works in practice and how this new piece of legislation will impact the day-to-day functioning of Wolt in light of the report on algorithmic transparency released by the company in April.

Robert Torvelainen first acknowledged the need for a revision of the legislation on platform workers, as well as citizens’ expectations regarding the transparency of algorithmic management. For this reason, he continued, Wolt, on its own initiative, assessed the consequences of algorithmic management for the tasks fulfilled through its platform, for self-employed couriers and consumers alike, in order to foster understanding and transparency of its own activities. The speaker explained the key elements of Wolt’s delivery services by specifying that the algorithm considers location data, vehicle type and availability to deliver as the only parameters for suggesting tasks to platform workers. He then clarified that, at Wolt, it is only the delivery experience that is rated and that, according to Wolt’s modus operandi, the end customer does not rate an individual courier’s performance: the customer rating covers the delivery experience as a whole, including how well the restaurant or merchant performed. As there are no courier ratings, these play no part in the algorithm, he added.
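
To make this concrete, the following minimal sketch (in Python, with hypothetical names and a simple distance heuristic) illustrates what a dispatch rule limited to those three parameters might look like; it is an assumption-laden illustration, not Wolt's actual algorithm.

```python
import math
from dataclasses import dataclass
from typing import Iterable, Optional

# Minimal sketch of a dispatch rule limited to the three parameters Torvelainen
# mentions (location, vehicle type, availability to deliver). All names and the
# distance heuristic are hypothetical illustrations, not Wolt's actual system.

@dataclass
class Courier:
    courier_id: str
    lat: float
    lon: float
    vehicle: str      # e.g. "bicycle" or "car"
    available: bool

def suggest_courier(couriers: Iterable[Courier],
                    pickup_lat: float,
                    pickup_lon: float,
                    suitable_vehicles: tuple = ("bicycle", "car")) -> Optional[Courier]:
    """Return the nearest available courier with a suitable vehicle, or None."""
    candidates = [c for c in couriers if c.available and c.vehicle in suitable_vehicles]
    if not candidates:
        return None
    # Straight-line distance as a stand-in; a real system would estimate travel time.
    return min(candidates, key=lambda c: math.hypot(c.lat - pickup_lat, c.lon - pickup_lon))

couriers = [Courier("c1", 60.17, 24.94, "bicycle", True),
            Courier("c2", 60.19, 24.95, "car", False)]
print(suggest_courier(couriers, 60.18, 24.93))
```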

Furthermore, he continued, the report on algorithmic transparency released by his company in April showed that the transparency of company processes is a fundamental element of this discussion and highlighted the need for regulation, as well as for explanations of both how the digital platform functions and the outcomes of its algorithmic management. He also noted that the technology often serves mostly to improve operational efficiency. Torvelainen specified that Wolt’s regular worker surveys, used in the report, show that more than half of the workers interviewed have a good understanding of algorithmic management, whereas just 20% have a poor understanding of it. For the speaker, the report is an example of how digital platform work can foster the transparency needed to meet workers’ requests.

The speaker concluded his answer with some concerns regarding the revision of the Platform Work Directive, which echo similar worries across the platform sector. In this regard, he stated that a lack of clarity may emerge when the systems which support decision-making are questioned, rather than the decision-making processes themselves. He also highlighted that digital platforms would face certain restrictions on data processing even in circumstances where the protection of customers or the safety of workers and of the equipment supplied are at stake.

Théo Bourgery-Gonse turned to Simon Hania and asked him to share his thoughts on the possible impact on consumers and digital platforms of the data processing restrictions recently proposed by the Czech Presidency, as well as on the possible overlap between the Platform Work Directive and the GDPR.

Simon Hania began by pointing out certain critical aspects in his capacity as a Data Protection Officer, a role which entails, among other tasks, helping his company to better understand the obligations stemming from the GDPR (the main legislation regulating data processing to date), monitoring the company’s activities and addressing questions and complaints from final users and platform workers.

The speaker subsequently expressed his view that the proposed revision of the Platform Work Directive potentially contains provisions which already exist under the GDPR. Moreover, he stated that, unlike a strict reading of the GDPR, the new legislative proposal would not require interpretation but rather gives ex-ante definitions of what data can be processed and how. In this connection, the Platform Work Directive could be described as a lex specialis which complements and particularises the GDPR, he remarked.

Hania also argued that this legislative initiative could, in his opinion, become a somewhat unwarranted and biased interpretation of the GDPR which would risk not taking into account the rights of all involved parties. Indeed, he continued, data processing, as the proposal stands, risks becoming strictly limited, since the worker’s contract could define what data can be used and how. This particular aspect of the upcoming legislation implies that the Platform Work Directive may hinder the processing of worker data for legal obligations, as well as for the rights and legitimate interests of the parties involved, which risks causing detriment to final users and workers alike. The speaker subsequently stated that, in this light, a revision of contracts would be required, yet the contradictions between the GDPR and the Platform Work Directive would remain. Therefore, he added, the legislative proposal currently under discussion should not be biased in this sense, as the GDPR, and especially Article 5, already requires transparency, fairness and accuracy when processing data.

Hania further elaborated on this topic by explaining the consequences when a digital platform cannot collect data on a worker’s emotional state in the context of complaints from final users, for instance regarding an “angry driver”. He highlighted that, under the conditions put in place by the GDPR, digital platforms already cannot collect or infer data on emotional state, but it remains possible to legitimately gather information on the service as experienced by final users. Under the Platform Work Directive, by contrast, it would not be possible to process any emotion-related data in any circumstance. This aspect of the directive, according to the speaker, could create several undesired or unintended consequences for other stakeholders.

Nevertheless, he similarly elaborated on how the revision of the Platform Work Directive could positively impact platform transparency: the directive could, in fact, extend the provisions of the GDPR in a useful way. Transparency of algorithmic decision-making is already covered under the GDPR, but companies would additionally need to disclose how algorithms work, including where their effects go beyond legal or similarly significant effects, he noted. The speaker also remarked on the importance of this new provision and agreed on the need to foster workers’ understanding of the basis on which decisions are taken, in order to avoid malicious interpretations of processes that are not factually accurate.

The speaker concluded his speech by focusing on the rights of all stakeholders involved in digital platforms and called for thorough consideration of this matter so as to produce a balanced approach to the outcomes of the new legislative proposal. Hania also added that, in order to avoid further confusion, aligning the two pieces of legislation is essential.

The moderator moved on and asked Zachary Kilhoffer about the impact assessments companies will have to carry out under the revision of the Platform Work Directive and about the possible effects of these new provisions. The moderator also asked Kilhoffer to compare US and EU legislation on digital platform work.

Zachary Kilhoffer began by explaining the main processes of AI and automated decision-making in the world of work through the “six Rs” framework: algorithmic management directs workers by restricting and recommending work tasks; workers are then evaluated on the basis of recorded data and the rating of their activities; and they face replacing and rewarding actions as forms of discipline, he added.
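
As a reading aid, the short sketch below groups the six Rs under the three broader mechanisms described in the Kellogg, Valentine and Christin (2020) paper listed among the sources; the one-line descriptions are paraphrases for illustration only, not quotations from the framework.

```python
from enum import Enum

# Reading aid only: the "six Rs" grouped by mechanism, following Kellogg,
# Valentine & Christin (2020). Descriptions are paraphrases, not quotations.

class SixR(Enum):
    RESTRICTING = "limiting the tasks or information available to workers"
    RECOMMENDING = "nudging workers towards particular tasks or behaviours"
    RECORDING = "continuously logging location, activity and output data"
    RATING = "scoring workers on the basis of recorded data and feedback"
    REPLACING = "deactivating or substituting workers as a disciplinary lever"
    REWARDING = "granting bonuses or better tasks to steer behaviour"

SIX_RS_BY_MECHANISM = {
    "direction": [SixR.RESTRICTING, SixR.RECOMMENDING],
    "evaluation": [SixR.RECORDING, SixR.RATING],
    "discipline": [SixR.REPLACING, SixR.REWARDING],
}

for mechanism, functions in SIX_RS_BY_MECHANISM.items():
    print(mechanism, [f.name.lower() for f in functions])
```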

The speaker subsequently moved on with the discussion and focused on different opinions regarding impact assessments and their role when it comes to AI fairness and biases. Within this context, he stated that risk and impact assessments are two approaches pointed out by several scholars as having potential to evaluate the effects that AI has on workers. He added that the revision of the Platform Work Directive, in this sense, encompasses both. Whenever algorithmic management is involved, the speaker continued, regular impact assessments must be carried out by digital platforms in order to analyse the risks and consequences of automated decisions and their effects on workers and consumers alike.

However, Kilhoffer questioned some of the practicalities that could hamper such efforts, for instance how the Platform Work Directive would establish criteria to prevent self-governed exercises such as impact assessments from becoming meaningless, and how the obligations contained in the legislative proposal would require digital platforms to oversee worker interactions while complying with due diligence. Indeed, he stated, impact assessments could guarantee more accountability and transparency, helping consumers and workers alike, although the latter would probably benefit the most, as they are the most vulnerable party in this relationship.

Kilhoffer concluded his speech by highlighting that a legislative proposal is currently under discussion in the US Congress, namely the Algorithmic Accountability Act of 2022, which foresees impact assessments of algorithmic management for digital platforms with annual gross receipts above 50 million dollars. Although this legislative initiative could help to better understand the requirements companies may have to fulfil, to date there are no practical guidelines nor clear precedents for digital platforms to follow when assessing the impact of their algorithmic management systems, he concluded.

Théo Bourgery-Gonse took the discussion further on the obligations and practical aspects of impact assessment in the Platform Work Directive. Subsequently, the moderator gave Tobias Müllensiefen the floor, asking him to explain how digital labour platforms should “regularly monitor and evaluate the impact of individual decisions taken or supported by an automated decision-making system on working conditions”, as stated in article 7 of the Platform Work Directive and how this aspect of the upcoming legislation would be enforced.

Tobias Müllensiefen first clarified that the Platform Work Directive does not require a general ex-ante impact assessment. A similar feature figures in another proposed piece of legislation, namely the AI Act, which would require high-risk AI systems to be subject to risk management systems as regards risks to fundamental rights or to health and safety.

The speaker also specified that this very piece of legislation focuses more on the rights and remedies for workers regarding the impact of algorithmic management systems on general working conditions, such as minimum wage, rest periods, and equal treatment.

Regarding Article 7 of the Platform Work Directive, he added that paragraph 1 requires platforms to monitor their automated monitoring and decision-making systems on an ongoing basis, so as to remain compliant with their obligations towards workers under EU and member states’ labour law. With regard to Article 7, paragraph 2, the question of assessment is more closely linked to health and safety, and would therefore entail an ex-ante risk assessment approach.

The moderator shifted the attention to a different topic: the dialogue between platform workers and digital platforms. He asked Simon Hania and Robert Torvelainen how platforms plan to foster efficient and timely information, as well as consultation, with workers and workers’ representatives while processing their data.

Robert Torvelainen took the floor and explained that one of the main challenges of algorithmic management transparency and accountability is, indeed, finding valid guidance. He highlighted that the UK government recently provided guidelines to public authorities on algorithmic transparency and that the cities of Helsinki and Amsterdam have already disclosed how their algorithms function in some aspects of their operations. Although state-owned entities and private companies are different institutions, the speaker continued, these kinds of provisions can help digital platforms to find guidance and ensure constructive communication and discussion with both workers and workers’ representatives.

Torvelainen also said that Wolt’s onboarding process for new partners already includes an explanation of how the Wolt platform operates. According to the speaker, the Platform Work Directive should guarantee further transparency without creating an excessive burden, also because the directive, once in place, will take some time to be transposed into national legislation, so companies will have time to adapt. Torvelainen concluded by remarking that digital platforms have every interest in providing the information workers need to better understand their tasks and the evaluation of their performance, as not doing so would be counterproductive.

Simon Hania first acknowledged that several requirements of the revised Platform Work Directive covering transparency and accountability through regular impact assessments are actually already covered under the GDPR. In this context, he said, under a robust reading of the GDPR, a data protection impact assessment is already prescribed in several domains, including where workers’ rights are concerned. Indeed, one of the elements of a data protection impact assessment under the GDPR is consultation with all possible stakeholders, not just workers but consumers and economic operators alike, he added. The speaker also said that the revised version of the Platform Work Directive would require digital platforms to carry out impact assessments and improve communication with workers.

Hania subsequently moved on to the aspects concerning data processing and monitoring, which, he again noted, are already covered by GDPR. Data protection officers are obliged to constantly oversee data collection, allocation, and use. In conclusion, according to the speaker, these provisions of the Platform Work Directive tie labour norms to data law more closely.

Théo Bourgery-Gonse asked Zachary Kilhoffer for some considerations on Simon Hania’s remarks.

Zachary Kilhoffer partly agreed with the aspects pointed out by his co-panellist; however, he also argued that even a thorough data protection impact assessment would not accomplish the aims of the Platform Work Directive, as the two instruments have different goals. The latter focuses on human rights or, more specifically, on the possible adverse outcomes of algorithmic management that may affect workers’ rights and well-being. According to the speaker, the GDPR, specifically Article 22, already includes certain provisions covering automated decision-making, but, given certain caveats, digital platforms can circumvent such obligations while formally upholding the principle of human supervision. In this connection, the speaker stated that both the rationale and the scope of the new directive are broader than those of the GDPR.

In support of his statement, the speaker mentioned a court case in which some UK Uber drivers were deactivated from the platform and banned from driving for the company as a result of a fraud accusation. The company did not provide sufficient explanation to these drivers and, consequently, the workers had to challenge Uber in court. In addition, he remarked that, under GDPR provisions, the company was actually not obliged to disclose information which could have helped the workers’ case. Kilhoffer ultimately agreed with Hania that some provisions of the Platform Work Directive tackle issues already covered by the GDPR, but he reiterated that the GDPR was not primarily put forward to protect workers’ rights.

Given the past issues between digital platforms and workers, the moderator asked Zachary Kilhoffer whether the Platform Work Directive will improve some of the aspects also covered by the GDPR and ultimately protect workers’ rights in a broader context.

According to Kilhoffer, the current revision of the Platform Work Directive may improve workers’ rights in many respects, as the obligations placed on digital platforms could prove effective. However, he added, it would be premature to make such a statement while the legislation is not yet final.

As an example, he continued, the provisions regarding risk assessment obligations may pose certain challenges at member state level, since the implementation of the Platform Work Directive may be interpreted and adapted differently in different legal frameworks. Indeed, the many important details of this legislative initiative could prove challenging when applied in different national legislations, especially since the Platform Work Directive aims at fostering human oversight of, and human interaction in, digital platforms’ decisions which affect workers.

The speaker reiterated that the legislative proposal is poised to ensure more rights and protection for workers, and is therefore a step in the right direction. Yet, as with all legislation, contention between workers and digital platforms, or court cases regarding unclear decisions, could emerge, particularly given the ambition of this new law. Another positive aspect of this legislative initiative, the speaker said, is the flexibility of the provisions enacted: digital platform work is constantly changing, and the EU institutions have tried to strike a balance between the current state of play of platform work and its probable future changes.

The moderator gave the floor to Simon Hania in order to respond to some of the statements made by Zachary Kilhoffer.

Hania started by arguing that the absence of certain automated processes allegedly enacted by digital platforms is very hard to prove in court, and that there is still often a degree of interpretation and a lack of clarity on the obligations stemming from the GDPR. In addition, he stated, ex-ante risk assessments should not be confused with ex-post explanations given after certain decisions have been taken. In the above-mentioned Dutch court case, for example, the allegations against the platform concerned a lack of explanation, human interaction and oversight, which, he continued, was not applicable in that specific case. Indeed, the speaker explained, the obligations of the GDPR already require companies to undertake ex-ante assessments before data processing takes place, with fundamental rights taken into high consideration. In fact, platforms must monitor and review data processing on top of the ex-ante risk assessment, the speaker said.

Hania also stated that the correct implementation of the GDPR requires time and legal supervision, and remarked that digital platforms have already changed certain operational aspects which did not ensure workers’ rights. In this regard, the speaker mentioned the case of the Italian data protection authority, which investigated some digital platforms and, based on existing law, concluded that some of the measures taken were not fully in accordance with fundamental principles such as fairness, accuracy and transparency. In summary, Hania concluded, GDPR provisions have already been applied by supervisory authorities to ensure workers’ rights.

The moderator finally asked all the speakers whether there should be further changes in the Platform Work Directive to ensure more fairness, transparency, and effectiveness.

Robert Torvelainen took the floor and explained how Wolt has already taken some of the obligations contained in the revision of the Platform Work Directive very seriously, particularly where they affect self-employed workers. Within this context, the speaker said that algorithmic management is already challenging, especially when applied to those who are self-employed, and that the criteria defining what kinds of automated decisions digital platforms can take with regard to the self-employed must be further clarified. Torvelainen concluded his answer by highlighting that the provisions of the Platform Work Directive determining whether a worker is employed or self-employed must further clarify which automated decisions digital platforms can enforce within the different groups of workers.

Subsequently, Simon Hania stressed the need for a revision of the Platform Work Directive in which the rights and freedoms of all actors involved are taken into consideration in order to be reconciled with workers’ rights, also in accordance with the GDPR. The speaker also pointed out that the enforcement of the Platform Work Directive’s algorithmic management provisions should be overseen by data protection authorities in cooperation with labour authorities, and not by the latter only. Furthermore, the speaker expressed concerns surrounding the need for broader cooperation between different national data protection authorities regarding norms that encompass both national and supranational legislation.

Zachary Kilhoffer focused on other aspects of the issues at stake. He suggested more legislative guidance on how digital platforms should act, adding that further clarity is needed on the obligations regarding decision-making processes. Moreover, the speaker remarked that impact assessments should be supported by clearer provisions, such as model templates for risk assessments. According to the speaker, concrete guidance explaining, for example, which aspects of their algorithms’ functioning digital platforms should disclose would be beneficial to workers and companies alike.

Tobias Müllensiefen concluded the round of discussion by acknowledging certain points made by the other speakers, with special regard to guidance.

The remaining part of the debate and the Q&A session covered the following issues: best practices in platform work; the question of the dialogue with platform workers; the question of reforming labour law beyond specific sectors; the question of workers’ wages based on time and not only on tasks; the matters of flexibility and freelancing with regard to potential limitations stemming from EU regulations; the avoidance of legislative divergence in the EU digital single market due to the revision of the Platform Work Directive; how the public debate on platform workers has been conducted so far; how to avoid undue pressure arising from decision-making systems; the role of national and local public authorities in guiding platforms and informing workers; and the need for all stakeholders, researchers and policymakers involved to work together on best practices in order to ensure further benefits from the EU platform economy.

Do you wish to know more about the issues discussed in this debate? Then check out the selected sources provided below!

A Europe fit for the digital age, Empowering people with a new generation of technologies, European Commission

Commission proposals to improve the working conditions of people working through digital labour platforms, European Commission

Initiative to improve the working conditions of people working in the platform economy, European Parliament

Digital workers: better working conditions and protection of rights, press release, European Parliament

EU rules on platform work, Council of the European Union

Programme of the Swedish Presidency, Swedish Presidency of the European Union

Proposal for a Regulation laying down harmonised rules on artificial intelligence (AI Act), European Commission

Digitalisation and changes in the world of work, European Parliament Study

Improving the working conditions of platform workers, European Parliamentary Research Service

Artificial intelligence and digital tools in workplace management and evaluation – An assessment of the EU’s legal framework, European Parliamentary Research Service

Algorithmic Accountability Act of 2022, US Congress

Algorithmic Justice and Online Platform Transparency Act, US Congress

Algorithmic Transparency Report 2022, Wolt

The Algorithmic Management of work and its implications in different contexts, International Labour Organisation

Platform Economy, European Trade Union Confederation

Regulating algorithmic management. An assessment of the EC’s draft Directive on improving working conditions in platform work, European Trade Union Institute

The Future of Work, Reports and Data, OECD

Rewiring the Firm: Algorithmic management and the future of work, OECD Forum

AI has made its way to the workplace. So how have laws kept pace?, OECD Policy Observatory

Kellogg, K. C., Valentine, M. A., & Christin, A. (2020). Algorithms at work: The new contested terrain of control. Academy of Management Annals, 14(1).

Sharing economy, Euractiv