Intel has developed a technology that can distinguish genuine videos from deepfakes in real-time using skin analysis.
Its new technology, FakeCatcher, can detect fake videos with a 96% accuracy rate and is the ‘world’s first real-time deepfake detector’ to return results in milliseconds.
‘Deepfake videos are everywhere now. You have probably already seen them; videos of celebrities doing or saying things they never actually did,’ said Ilke Demir, senior staff research scientist at Intel Labs.
The FakeCatcher deepfake detector works by analysing ‘blood flow’ in video pixels to determine a video’s authenticity in milliseconds.
Most deep learning-based detectors look at raw data to try to find signs of inauthenticity and identify what is wrong with a video. In contrast, FakeCatcher looks for authentic clues in real videos, by assessing what makes us human, such as ‘blood flow’ in the pixels of a video.
When our hearts pump blood, our veins change colour. These blood flow signals are collected from all over the face and algorithms translate these signals into maps.
‘Then, using deep learning, we can instantly detect whether a video is real or fake,’ said Intel.
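Intel has not published FakeCatcher's code, but the general idea it describes, remote photoplethysmography (rPPG), can be illustrated with a short sketch. The example below synthesises a video in which a face region's green channel pulses faintly at 1.2 Hz (72 beats per minute), extracts the per-frame mean colour as a "blood flow" signal, and recovers the pulse rate with an FFT. All function names and parameters here are hypothetical; this is a minimal illustration of the principle, not Intel's method.

```python
import numpy as np

def extract_ppg_signal(frames, face_region):
    """Average the green channel over a face region for each frame.

    The subtle periodic colour change caused by blood flow (rPPG)
    shows up as an oscillation in this per-frame mean.
    """
    y0, y1, x0, x1 = face_region
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

def dominant_frequency(signal, fps):
    """Return the strongest frequency (in Hz) of the detrended signal."""
    detrended = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(len(detrended), d=1.0 / fps)
    return freqs[spectrum.argmax()]

# Synthetic demo: 10 s of 30 fps "video" with a faint 1.2 Hz pulse
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
pulse = 1.5 * np.sin(2 * np.pi * 1.2 * t)           # blood-flow colour change
frames = np.full((len(t), 64, 64, 3), 128.0)
frames[:, 16:48, 16:48, 1] += pulse[:, None, None]  # add pulse to green channel

sig = extract_ppg_signal(frames, (16, 48, 16, 48))
bpm = dominant_frequency(sig, fps) * 60
print(f"estimated pulse: {bpm:.0f} bpm")  # → estimated pulse: 72 bpm
```

In a real detector, these signals would be collected from many facial regions, assembled into spatial maps, and fed to a trained classifier; a deepfake's synthesised skin lacks the consistent pulse signal a live face produces.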
According to the company, up to 72 streams can be analysed at once using one of its 3rd Gen Xeon processors. However, these processors are a bit more heavy-duty than the CPUs found in our laptops and desktop PCs, and can cost up to around £4,000.
Deepfake videos are a growing threat, with companies expected to spend up to $188 billion on cybersecurity solutions, according to Gartner.
It’s also tough to detect these deepfake videos in real-time as detection apps require uploading videos for analysis, and then waiting hours for results.
What are Deepfakes?
Deepfakes are videos and images that use deep learning AI to fabricate content that never existed. They are best known for being used in porn videos, fake news, and hoaxes.
This disinformation can make events that never happened appear real, place people in situations they were never in, or depict people saying things they never said.
Above all, deepfakes risk diminishing trust in media.
In April, Ukraine accused Russia of preparing to launch a ‘deepfake’ of President Volodymyr Zelensky surrendering.
FakeCatcher can help in restoring trust by enabling users to distinguish between real and fake content.
Social media platforms could use the technology to stop users from uploading harmful deepfake videos.
News organisations could likewise use it to avoid inadvertently amplifying manipulated videos, while nonprofit organisations could use the platform to make deepfake detection accessible to everyone.