DSA Transparency report – February 2025

Name of the service provider

Aylo Social Ltd

Date of the publication of the report

28 February 2025

Service

UVIU

Reporting period

The following report covers the reporting period of 17 February 2024 – 31 December 2024

Orders from authorities (Art. 15(1)(a) DSA)

The table below shows the number of orders from law enforcement for immediate removal, per country and type.

Country Total Orders
Austria -
Belgium -
Bulgaria -
Croatia -
Cyprus -
Czech Republic (Czechia) -
Denmark -
Estonia -
Finland -
France -
Germany -
Greece -
Hungary -
Ireland -
Italy -
Latvia -
Lithuania -
Luxembourg -
Malta -
Netherlands -
Poland -
Portugal -
Romania -
Slovakia -
Slovenia -
Spain -
Sweden -
Totals 0


To confirm, we have not received any removal orders from law enforcement.

The table below indicates the number of information requests from law enforcement relating to individuals/users per country and type.

Country Total Number of Requests
Austria -
Belgium -
Bulgaria -
Croatia -
Cyprus -
Czech Republic (Czechia) -
Denmark -
Estonia -
Finland -
France -
Germany -
Greece -
Hungary -
Ireland -
Italy -
Latvia -
Lithuania -
Luxembourg -
Malta -
Netherlands -
Poland -
Portugal -
Romania -
Slovakia -
Slovenia -
Spain -
Sweden -
Totals 0


To confirm, we have not received any information requests from law enforcement.


User notices (Art. 15(1)(b) DSA)

Note that the figures provided in this section are for the total number of notices received. A notice may list one or several pieces of content, and one piece of content could be flagged several times.

Content reported by users

The table below indicates the number of notices submitted by users through all available notification channels on UVIU, including content removal requests (CRRs) and content flags.

Type of potential violation Total
Potential Child Sexual Abuse Material 6
Non-Consensual Content 22
Illegal or Harmful Speech 2
Content in violation of the platform's terms and conditions 3,954
Intellectual property infringements 80
Total 4,064


DSA Trusted Flaggers

We did not receive any removal requests from DSA Trusted Flaggers during the reporting period.

Actions taken on user reports

The table below indicates the number of pieces of content removed on the basis of user notices.

Reason for Removal Total
Content in violation of the platform's terms and conditions 41
Non-Consensual Behavior 33
Potential Child Sexual Abuse Material 2
Withdrawal of consent 130
Total 206


Notices processed by automated means

All notices are processed by our human moderation team, and we do not use automated measures to decide any requests. Note that content is immediately suspended from public view when reported via our content removal request form, prior to human review, provided that the submitter has validated their email address. If, after diligent human review, no illegality or incompatibility with our terms and conditions is confirmed, the content is reinstated.
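For illustration only, the minimal sketch below models the suspend-and-review flow described above. All names and data structures are hypothetical and do not describe UVIU's production systems; every reported item still ends with a human moderation decision.

```python
# Purely illustrative sketch of the suspend-and-review flow described above;
# names and data structures are hypothetical, not UVIU's actual systems.

suspended: set[str] = set()    # content hidden from public view pending review
removed: set[str] = set()      # content removed after a confirmed violation
review_queue: list[str] = []   # every notice is reviewed by a human moderator

def handle_removal_request(content_id: str, email_validated: bool) -> None:
    """Content reported via the removal request form is suspended from public
    view prior to human review, provided the submitter validated their email."""
    if email_validated:
        suspended.add(content_id)
    review_queue.append(content_id)

def resolve_after_review(content_id: str, violation_confirmed: bool) -> None:
    """After diligent human review: remove confirmed violations; otherwise
    the content is reinstated simply by lifting the suspension."""
    suspended.discard(content_id)
    if violation_confirmed:
        removed.add(content_id)
```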

Median resolution time 

Reporting source Median time
Content removal request form 9.5 days
Content flags 0.25 days
Copyright infringement form 1.15 days
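For context, each median above is the middle value (or the average of the two middle values) of the individual resolution times recorded for that reporting channel during the period. A minimal illustration, using made-up durations rather than actual UVIU data:

```python
import statistics

# Hypothetical resolution times (in days) for one reporting channel;
# illustrative values only, not actual UVIU data.
resolution_times_days = [0.5, 2.0, 8.0, 14.0, 30.0]

# The median is the middle value, so a few unusually slow cases do not skew it.
print(statistics.median(resolution_times_days))  # -> 8.0
```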


Content moderation (Art. 15(1)(c) DSA) & Automated content moderation (Art. 15(1)(e) DSA)

We use a combination of automated tools, artificial intelligence, and human review to help protect our community from illegal content. While all content available on the platform is reviewed by human moderators prior to publishing, we also have additional layers of moderation which audit material on our live platform for any potential violations of our Terms of Service.

The accuracy of content moderation is largely unaffected by Member State language due to our extensive use of automated tools and human moderation. Internal statistics show no significant differences between languages, and offenses are largely language-independent.

Automated tools are used to help inform human moderators in making a manual decision. When an applicable automated tool detects a match between an uploaded piece of content and an entry in a hash list of previously identified illegal material, and that match is confirmed, the content is removed before it reaches a moderator. All metadata is scanned against our Banned Word Service before it reaches moderators.
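A minimal sketch of this pre-moderation screening is shown below, assuming a set of hashes of previously identified illegal material and a simplified banned-word list. The hashing scheme, list contents, and function names are hypothetical simplifications of the external tools and the Banned Word Service described in this report.

```python
import hashlib

# Hypothetical hash list of previously identified illegal material and a
# simplified banned-word list; in production these are supplied by the
# external tools and the Banned Word Service described in this report.
KNOWN_ILLEGAL_HASHES: set[str] = set()
BANNED_WORDS: set[str] = {"example-banned-term"}

def screen_upload(file_bytes: bytes, metadata: str) -> dict:
    """Screen an upload before it reaches a human moderator."""
    # A confirmed exact match against the hash list is removed before the
    # content ever reaches a moderator.
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in KNOWN_ILLEGAL_HASHES:
        return {"decision": "removed", "reason": "match with known illegal material"}

    # All metadata is scanned against the banned-word list; flagged terms are
    # surfaced to the moderator rather than deciding the outcome automatically.
    flagged_terms = [word for word in BANNED_WORDS if word in metadata.lower()]
    return {"decision": "queue_for_human_review", "flagged_terms": flagged_terms}
```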

Training and support given to content moderation HR

All moderators receive extensive training over a 3-month period that involves theoretical and practical exercises, job shadowing, and a final exam that requires a perfect score to pass. Once their grasp of the fundamentals of the compliance guidelines is confirmed, moderators are supervised on all of their reviews for a period of time. Any moderation errors are addressed and corrected to ensure consistent application of the guidelines.

We use two different virtual care platforms (North America & Europe) that give moderators access to a variety of health and wellness professionals. We also use an additional program which provides moderators with further, complementary support and tailored wellness programs consisting of fitness/nutrition/life coaches, counsellors, and medical professionals.

Automated Tools

UVIU’s content moderation process includes an extensive team of human moderators dedicated to reviewing every single upload before it is published; a thorough system for flagging, reviewing, and removing illegal material; parental controls; and a variety of automated detection technologies for known and previously identified, or potentially inappropriate, content. Specifically:

Hash-list tools – known illegal material

We use a variety of tools that scan incoming images and videos against hash-lists provided by NGOs. If there is a match, then content is blocked before publication.

  • CSAI Match: YouTube’s proprietary technology for combating Child Sexual Abuse Imagery online.
  • PhotoDNA: Microsoft’s technology that aids in finding and removing known images of child exploitation.
  • Safer: In November 2020, we became the first adult content platform to partner with Thorn, allowing us to begin using its Safer product on our platforms, adding an additional layer of protection in our robust compliance and content moderation process. Safer joins the list of technologies that our platforms utilize to help protect visitors from unwanted or illegal material.
  • Instant Image Identifier: The Centre of Expertise on Online Child Sexual Abuse (Offlimits) tool, commissioned by the European Commission, detects known child abuse imagery using a triple-verified database.
  • NCMEC Hash Sharing: NCMEC’s database of known CSAM hashes, including hashes submitted by individuals who fingerprinted their own underage content via NCMEC’s Take It Down service.
  • StopNCII.org: A global initiative (developed by Meta & SWGfL) that prevents the spread of non-consensual intimate images (NCII) online. If any adult (18+) is concerned about their intimate images (or videos) being shared online without consent, they can create a digital fingerprint of their own material and prevent it from being shared across participating platforms.
  • Internet Watch Foundation (IWF) Hash List: IWF’s database of known CSAM, sourced from hotline reports and the UK Home Office’s Child Abuse Image Database.

AI tools – unknown illegal material

We utilize several tools that use AI to estimate the ages of performers. The output from these tools assists content moderators in their decision to allow publication of uploaded content; an illustrative sketch follows the list below. Specifically:

  • Google Content Safety API: Google's artificial intelligence tool that helps detect illegal imagery.
  • Age Estimation: We also utilize age estimation capabilities to analyze content uploaded to our platform using a combination of internal proprietary software and external technology, provided by AWS and PrivateID to strengthen the varying methods we use to prevent the upload and publication of potential or actual CSAM.
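As a rough illustration of how an age-estimation signal can inform, rather than replace, a human decision, the sketch below triages uploads by the lowest estimated age detected. The threshold and names are hypothetical and do not reflect the actual configuration of the tools listed above.

```python
# Hypothetical review-triage logic driven by per-face age estimates;
# the threshold and names are illustrative, not UVIU's actual settings.
REVIEW_THRESHOLD_YEARS = 25.0  # conservative margin above the age of majority

def triage_by_estimated_age(estimated_ages: list[float]) -> str:
    """Return a triage hint for moderators based on per-face age estimates.

    The output only informs the moderator's decision; publication is never
    approved automatically.
    """
    if not estimated_ages:
        return "no_face_detected: escalate to senior moderator"
    if min(estimated_ages) < REVIEW_THRESHOLD_YEARS:
        return "flag_for_enhanced_review"
    return "standard_human_review"
```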

Fingerprinting tools

In addition to hashes received from NGOs, we also use fingerprint databases to prevent previously prohibited material from being re-uploaded. Images and videos removed during the moderation process, or subsequently removed post-publication, are fingerprinted using the following tools to prevent re-publication. Content may also be proactively fingerprinted with these tools. An illustrative sketch of fingerprint matching follows the list below.

  • Safeguard: Safeguard is Aylo’s proprietary image recognition technology designed with the purpose of combatting both child sexual abuse imagery and non-consensual content, by preventing the re-uploading of previously fingerprinted content to our platform.
  • MediaWise: Vobile’s fingerprinting software that scans new uploads for potential matches to unauthorized materials, protecting previously fingerprinted videos from being uploaded or re-uploaded to the platform.
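As a generic illustration of how a fingerprint database can block re-uploads (not a description of Safeguard or MediaWise internals), the sketch below compares a perceptual fingerprint of a new upload against previously fingerprinted material using a hypothetical similarity threshold.

```python
# Generic illustration of fingerprint matching; the fingerprint format,
# database, and threshold are hypothetical, not Safeguard or MediaWise.

FINGERPRINT_DB: list[int] = []   # 64-bit fingerprints of previously removed
                                 # or proactively fingerprinted material
MAX_DISTANCE = 8                 # hypothetical similarity threshold

def hamming_distance(fp_a: int, fp_b: int) -> int:
    """Number of differing bits between two 64-bit perceptual fingerprints."""
    return bin(fp_a ^ fp_b).count("1")

def is_reupload(new_fingerprint: int) -> bool:
    """Block a new upload if it is near-identical to fingerprinted material.

    Unlike an exact hash match, fingerprint matching tolerates small edits
    (re-encoding, cropping, watermarking) via a distance threshold.
    """
    return any(hamming_distance(new_fingerprint, fp) <= MAX_DISTANCE
               for fp in FINGERPRINT_DB)
```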

Moderation / Compliance Content Upload Process

The below chart shows our moderation/compliance process from account creation to publication.

[Chart: Content upload process]



Accuracy & Safeguards

Whilst automated tools assist in screening for and detecting illegal material, uploaded images and videos cannot be published without being reviewed and approved by our trained staff of moderators. This acts as a quality-control mechanism and safeguard for the automated systems.

Video removals from internal moderation

The table below provides the number of videos removed* on the basis of proactive voluntary measures (internal moderation, internal tools, internal audit), broken down by type of removal and total.

Reason for Removal Total
Content in violation of the platform's terms and conditions 898
Non-Consensual Behavior 82
Potential Child Sexual Abuse Material 105
Animal Welfare 4
Bodily Harm/Violence 6
Illegal or Harmful Speech 1
Total 1,096

* Removals in this section may include content already removed in a previous period and reclassified under a different reason code during this reporting period as a result of internal auditing.

Manual vs automated removals from internal moderation

The table below indicates the pieces of content removed by internal means, broken down by automated (tools) and manual (internal moderation, internal audit) decisions. Automated decisions are those where an exact binary match was achieved through one of our hashing tools against known illegal material. Manual decisions are those where a human made the decision, with or without the help of assisting tools.

Type of Content Total
Videos - Automated 75
Videos - Manual 1,024
Total 1,096


User restrictions

The table below indicates the number of users banned, broken down by reason for removal.

Reason for Removal Total
Animal Welfare 1
Content in violation of the platform's terms and conditions 92
Goods/services not permitted to be offered on the platform 16
Non-consensual image sharing 16
Potential Child Sexual Abuse Material* 13
Total 138


Complaints received against decisions (Art. 15(1)(d) DSA)

The table below shows the number of appeals from users against decisions to remove their content or to impose restrictions against their account. Appeals include requests for additional information about the corresponding removal or restriction.

Appeals - Account Restrictions Number of Appeals
Total Account Appeals 10
Decision Upheld 10

The median time to resolve these complaints was 2.7 days.


Appeals - Content Removals Number of Appeals
Total Content Appeals 3
Decision Upheld 3

The median time to resolve these complaints was 4.6 days.

Out-of-court dispute settlement (Art. 24(1)(a) DSA)

To our knowledge, no disputes have been submitted to out-of-court settlement bodies during the reporting period.

Suspensions for misuse (Art. 24(1)(b) DSA)

Accounts banned for providing content manifestly violating the law or our terms and conditions: 138

Number of accounts that repeatedly submitted unfounded notices: 0
