The rise of large language models has shown that algorithms nowadays are more than capable of mimicking human behavior online.
A study from March 2023 revealed that participants could not accurately distinguish between human-written and AI-generated text. Researchers worry that these models could be used as tools for malicious acts.
Companies like Microsoft have built guardrails into their AI systems to prevent them from being used for misinformation and other schemes. However, many generative models have been open-sourced or leaked, allowing anyone to use them for their own gain.
It is increasingly difficult to prove that any user you interact with on the internet is not a bot. Social media platforms such as Reddit and TikTok have already introduced community rules that restrict AI-generated content.
As our lives become more reliant on online interactions, it is important for these online platforms to establish protocols that can prove that an account is run by a human.
In this article, we will explain the requirements for such a protocol and look into advancements developed by Web3 applications to solve this issue of proving personhood.
What is Proof of Personhood?
Proof of personhood (PoP) is a type of protocol that enables a network to verify that a real human is behind a particular action.
Decentralized systems can implement PoP mechanisms to prevent malicious activity from occurring.
What happens when a decentralized network lacks a way to verify humanness?
One of the most difficult Web3 challenges is preventing Sybil attacks. This type of threat occurs when a user finds a way to operate multiple accounts to gain an unfair advantage on a platform or network.
For example, an attacker can create multiple fake accounts on a platform like Twitter or Facebook. After gaining access to a large number of accounts, the attacker can use their reach to spread disinformation or manipulate public opinion.
In networks where each user can vote, an attacker can create multiple fake identities to manipulate the results.
Proof of personhood protocols can prevent Sybil attacks by requiring individuals to prove that they are real human beings before allowing them to participate in a network.
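The one-person-one-vote idea above can be sketched in a few lines. This is a minimal illustration, assuming each verified human holds a single unique PoP credential; the `Ballot` class and credential strings are hypothetical names, not any real protocol's API.

```python
# Minimal sketch: votes are keyed by PoP credential, so a Sybil
# attacker with many accounts but one credential gets one vote.
class Ballot:
    def __init__(self):
        self._votes = {}  # credential -> choice

    def cast_vote(self, pop_credential: str, choice: str) -> bool:
        # Reject a second vote from the same credential.
        if pop_credential in self._votes:
            return False
        self._votes[pop_credential] = choice
        return True

    def tally(self) -> dict:
        counts = {}
        for choice in self._votes.values():
            counts[choice] = counts.get(choice, 0) + 1
        return counts


ballot = Ballot()
ballot.cast_vote("cred-alice", "yes")
ballot.cast_vote("cred-bob", "no")
ballot.cast_vote("cred-alice", "no")  # duplicate credential, ignored
```

The key point is that the scarce resource is the credential, not the account: without a PoP credential of their own, extra accounts contribute nothing to the tally.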
The Threat of AI Models on Current PoP Methods
You may have already encountered a basic form of PoP through bot detection services such as reCAPTCHA. Websites add these tests to ensure that the person using the service is an actual human. The tests are designed to be easy for a human to solve but much harder for a computer.
For example, a common reCAPTCHA test asks the user to select all squares in a grid that contain a bridge, stop sign, or stairs.
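Server-side, a grid challenge like this reduces to a simple set comparison. The sketch below is illustrative only (it is not how reCAPTCHA is actually implemented); the function name and answer key are assumptions.

```python
# Illustrative server-side check for a grid-selection challenge:
# the server knows which tile indices contain the target object.
def verify_challenge(correct_tiles: set, selected_tiles: set) -> bool:
    # Pass only on an exact match: no missed tiles, no false picks.
    return selected_tiles == correct_tiles


# Suppose tiles 2, 5, and 7 of a 3x3 grid contain a stop sign.
answer_key = {2, 5, 7}
print(verify_challenge(answer_key, {2, 5, 7}))  # True
print(verify_challenge(answer_key, {2, 5}))     # False
```

Note that passing this check says nothing about how many times the same person has passed it, which is exactly the uniqueness gap discussed below.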
However, as AI models are becoming more advanced at image detection, these types of tests are slowly becoming obsolete. These tests also have one critical limitation: solving the test does not prove that you are a unique user.
A proper and secure PoP protocol must reliably prove both that a profile belongs to an actual person and that the person cannot create multiple accounts for themselves.
In the next section, we’ll take a deeper look into the main requirements for proof of personhood mechanisms and how these characteristics can help set up global decentralized identities.
Requirements for Proof of Personhood
Here are some key properties of an ideal proof of personhood protocol.
- The protocol must preserve privacy. The PoP mechanism must be able to keep the user anonymous.
- The PoP protocol must also be resistant to fraud. Users should not be able to create multiple profiles on the same platform.
In order for a PoP protocol to achieve global adoption, the network itself must be scalable and decentralized.
Before we look into some promising implementations of PoP protocols that aim to achieve all of the properties above, let’s take a look at the downsides of some of the most popular proof of personhood methods.
First, let’s take a look at the Turing test approach. You’ve certainly encountered one of these tests before if you’ve ever had to solve a captcha online.
Have you ever noticed that these tests are becoming harder to solve? AI has reached a point where challenge-response tasks such as interpreting an image are trivial. Malicious actors can also turn to solving services that employ teams of human workers to complete these tests at scale.
Another common PoP approach is identity verification. Most financial institutions follow some form of KYC (Know-Your-Customer) standard to deter fraudulent or malicious activity on their platforms.
Suppose you want to create a new account at your local bank. The bank will typically require you to present some form of government ID. Social media platforms like Facebook and Twitter also use a form of identity verification. These platforms ask users to verify their cellphone number or email to prevent a single user from creating dozens of accounts on their platform.
While this method helps deter malicious actors, there are still many ways to bypass these limits. For example, a malicious actor can use techniques such as SMS spoofing to gain access to a large number of accounts.
Additionally, KYC identification is difficult to implement globally since not every person has an ID. Even when an individual does have an ID, a centralized body still stores and controls those records.
Possible Approaches for Proof of Personhood
Web of trust
The web of trust approach for proof of personhood is a decentralized method of identity verification.
In this approach, users create and manage their own digital identities by publishing digital certificates on a public platform. They then wait for those certificates to be signed by trusted, already-verified members of the community. This process creates a "web of trust" that vouches for the individual's identity.
The more individuals who sign a user’s certificate, the more trusted and verified their identity becomes. This creates a network of trust that can help verify an individual’s identity online.
Projects like Proof of Humanity focus on building webs of trust for Web3. Users must upload a video of themselves talking, with an Ethereum address clearly visible on a device or sheet of paper. They must also deposit a small number of tokens, which is returned once a registered user has vouched for their identity.
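The deposit-and-vouch flow described above can be sketched as a small registry. This is a conceptual illustration only: the class, the deposit amount, and the single-vouch threshold are assumptions for the example, not Proof of Humanity's actual contract logic.

```python
# Conceptual deposit-and-vouch registry in the style described above.
DEPOSIT = 10          # tokens locked when submitting a profile (assumed)
VOUCHES_REQUIRED = 1  # registered users who must vouch (assumed)


class Registry:
    def __init__(self):
        self.registered = set()  # verified identities
        self.pending = {}        # address -> vouch count

    def submit(self, address: str, balance: int) -> bool:
        # A new profile must lock a deposit and then wait for vouches.
        if balance < DEPOSIT or address in self.registered:
            return False
        self.pending[address] = 0
        return True

    def vouch(self, voucher: str, address: str) -> None:
        # Only already-registered users may vouch for a candidate.
        if voucher not in self.registered or address not in self.pending:
            return
        self.pending[address] += 1
        if self.pending[address] >= VOUCHES_REQUIRED:
            # Enough vouches: register the identity, return the deposit.
            self.registered.add(address)
            del self.pending[address]


registry = Registry()
registry.registered.add("0xVouchingUser")  # bootstrap one trusted user
registry.submit("0xNewUser", balance=25)
registry.vouch("0xVouchingUser", "0xNewUser")
```

Restricting vouches to already-registered users is what makes this a web of trust: every new identity traces back through signatures to previously verified humans.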
Biometrics
Biometrics is an authentication method that relies on an individual’s unique biological characteristics for identity verification. Since these characteristics cannot be lost or forgotten, biometrics can be used as a reliable method for proof of personhood.
There are several methods of biometrics with varying degrees of difficulty in implementation.
Fingerprint biometrics involves using an individual’s unique fingerprint patterns to verify their identity. Fingerprint biometrics are widely accepted as a convenient method of proof of personhood in government and business settings.
Users can also verify their identity through the use of face biometrics. Platforms can use facial recognition technology to match a user’s face to their government-issued ID or other documents. The success of Apple’s Face ID system has shown the feasibility of face biometrics in mobile devices as an alternative to passcodes and fingerprint biometrics.
Another potential method is the use of iris biometrics, which scans the unique patterns found in an individual's iris. Researchers argue that iris recognition is more accurate than face and fingerprint biometrics. Iris patterns are more distinctive than fingerprints and remain relatively unchanged as the individual ages.
One caveat of iris biometrics is that scanning the user’s iris requires specialized devices.
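To make the matching step concrete: iris recognition systems commonly encode a scan as a binary "iris code" and compare two codes by their normalized Hamming distance, treating a small enough distance as a match. The short codes and the 0.32 threshold below are illustrative values, not production parameters.

```python
# Sketch of iris-code matching via normalized Hamming distance:
# the fraction of bit positions where two codes disagree.
def hamming_distance(code_a: list, code_b: list) -> float:
    assert len(code_a) == len(code_b)
    diffs = sum(a != b for a, b in zip(code_a, code_b))
    return diffs / len(code_a)


MATCH_THRESHOLD = 0.32  # illustrative cutoff for "same person"

enrolled = [1, 0, 1, 1, 0, 0, 1, 0]
scanned  = [1, 0, 1, 0, 0, 0, 1, 0]  # one bit differs

distance = hamming_distance(enrolled, scanned)
is_same_person = distance < MATCH_THRESHOLD
print(distance, is_same_person)  # 0.125 True
```

Real systems use codes thousands of bits long and must also handle rotation and occlusion, which is part of why iris scanning requires specialized capture hardware.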
The privacy-focused digital identity platform Worldcoin plans to use custom hardware called the "Orb". The device issues proof of personhood credentials that AI would have a difficult time forging. The Orb also protects the user's information by deleting all images after each verification.
Conclusion
As decentralized applications find more real-world use cases, developers need to integrate ways to prevent malicious actors from taking advantage of the system. Proof of personhood mechanisms are a key part of keeping these platforms secure and reliable.
Research on proof of personhood approaches should also focus on the danger of attackers using AI to fool the system. If AI has the ability to emulate any person’s face and speech, online platforms could be at risk of being overrun by fraudulent and malicious profiles posing as real humans.
What do you think is the best way to approach the issue of digital identities in the age of AI?