OpenAI says GPT-4 AI cuts content moderation time down from months to hours
OpenAI, the developer behind ChatGPT, is advocating the use of artificial intelligence (AI) in content moderation, asserting its potential to make operations more efficient for social media platforms by speeding up the handling of challenging tasks.
The Microsoft-backed AI company said its latest GPT-4 model can shorten content moderation timelines from months to a matter of hours while ensuring more consistent labeling.
Content moderation is a challenging task for social media companies such as Meta, the parent company of Facebook, which must coordinate armies of moderators around the globe to prevent users from seeing harmful material such as child pornography and highly violent images.
OpenAI said in its statement: “The process (of content moderation) is inherently slow and can lead to mental stress on human moderators. With this system, the process of developing and customizing content policies is trimmed down from months to hours.”
According to the statement, OpenAI is actively investigating the use of large language models (LLMs) to tackle these issues. Models such as GPT-4 can understand and generate natural language, making them suitable for content moderation, and they can make moderation decisions guided by policy guidelines provided to them.
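In practice, policy-guided moderation of this kind amounts to handing the model the written policy alongside the content to judge, then parsing a label out of its reply. The sketch below illustrates the idea; the policy text, label set and helper names are illustrative assumptions, not OpenAI's actual API or published prompts.

```python
# Hypothetical sketch of policy-guided LLM moderation.
# POLICY, the label set and these helpers are illustrative only.

POLICY = """\
Label the user message with exactly one category:
- ALLOW: the message violates no policy
- VIOLENCE: the message depicts or glorifies graphic violence
- HARASSMENT: the message targets a person with abuse
"""

VALID_LABELS = {"ALLOW", "VIOLENCE", "HARASSMENT"}

def build_prompt(policy: str, message: str) -> str:
    """Combine the written policy with the content to be moderated."""
    return f"{policy}\nMessage: {message}\nLabel:"

def parse_label(llm_output: str) -> str:
    """Extract the first valid label from the model's reply;
    route anything unparseable to a human reviewer."""
    for token in llm_output.upper().split():
        token = token.strip(".,:;")
        if token in VALID_LABELS:
            return token
    return "NEEDS_HUMAN_REVIEW"
```

Because the policy lives in the prompt rather than in model weights, updating a rule is an edit to a text file, which is the mechanism behind the months-to-hours claim.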
GPT-4’s predictions can also be used to fine-tune much smaller models that handle data at scale. This approach improves content moderation in several ways, including more consistent labels, a faster feedback loop and a reduced mental burden on human moderators.
The statement noted that OpenAI is working to improve GPT-4’s prediction accuracy, including by exploring the integration of chain-of-thought reasoning and self-critique. It is also experimenting with ways to identify unfamiliar risks, drawing inspiration from Constitutional AI.
Related: China’s new AI regulations begin to take effect
OpenAI’s goal is to use models to detect potentially harmful content based on broad descriptions of harm. Insights from these efforts will feed into refining existing content policies or crafting new ones in uncharted risk areas.
In related news, OpenAI CEO Sam Altman clarified on Aug. 15 that the company does not train its AI models on user-generated data.
Magazine: AI Eye: Apple developing pocket AI, deep fake music deal, hypnotizing GPT-4