OpenAI's latest report details how it's fighting back against AI misuse / (Record no. 30960)

MARC details
000 -LEADER
fixed length control field 06566nam a22002897a 4500
003 - CONTROL NUMBER IDENTIFIER
control field OSt
005 - DATE AND TIME OF LATEST TRANSACTION
control field 20251021081846.0
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION
fixed length control field 251021b |||||||| |||| 00| 0 eng d
040 ## - CATALOGING SOURCE
Original cataloging agency TUP University Library
Language of cataloging eng
Transcribing agency TUPM
Description conventions rda
100 1# - MAIN ENTRY--PERSONAL NAME
Personal name Reyes, Bob,
Relator term author.
245 10 - TITLE STATEMENT
Title OpenAI's latest report details how it's fighting back against AI misuse /
Statement of responsibility, etc. by Bob Reyes.
264 #1 - PRODUCTION, PUBLICATION, DISTRIBUTION, MANUFACTURE, AND COPYRIGHT NOTICE
Place of production, publication, distribution, manufacture Manila :
Name of producer, publisher, distributor, manufacturer Manila Bulletin Tech News,
Date of production, publication, distribution, manufacture, or copyright notice [2025]
300 ## - PHYSICAL DESCRIPTION
Extent 1 online resource.
336 ## - CONTENT TYPE
Source rdacontent
Content type term text
337 ## - MEDIA TYPE
Source rdamedia
Media type term computer
338 ## - CARRIER TYPE
Source rdacarrier
Carrier type term online resource
500 ## - GENERAL NOTE
General note Article published in Manila Bulletin Tech News, discussing OpenAI's report titled "Disrupting Malicious Uses of AI: An Update."
520 ## - SUMMARY, ETC.
Summary, etc. As Artificial Intelligence (AI) becomes part of everyday life, from helping professionals write emails to assisting developers in coding, concerns about how it can be misused have never been more pressing. OpenAI, the company behind ChatGPT, has released an important report titled “Disrupting Malicious Uses of AI: An Update,” offering an in-depth look at the strategies it employs to detect, prevent, and disrupt bad actors who try to exploit AI for harmful purposes.

For people who are already experimenting with AI or are considering trying it out, this report provides valuable insight into how one of the world’s leading AI companies is keeping the technology safe and responsible.

Understanding the Threat of Malicious AI Use

AI’s capabilities have expanded dramatically over the past few years. Chatbots can now write essays, translate languages, generate code, and even simulate human conversation convincingly. While these tools are designed to enhance productivity and creativity, OpenAI’s report highlights how they can also be used for unethical or illegal purposes.

Threat actors (individuals or groups that seek to exploit technology for gain) have been found attempting to use Large Language Models (LLMs) to automate phishing campaigns, craft more convincing scams, or even generate malware code. Some have also explored ways to use AI to spread misinformation, manipulate public opinion, or evade cybersecurity systems.

This growing list of potential abuses shows the importance of building safeguards directly into AI systems. OpenAI has been working tirelessly to ensure its models, including ChatGPT, are not only powerful but also secure against exploitation.

Proactive Defense: How OpenAI Prevents Misuse

To stop harmful behavior before it happens, OpenAI employs a multi-layered approach to safety. Rather than simply reacting to misuse, it works proactively to anticipate and block malicious activities.

The company’s defenses operate at several levels. First, “preventive mechanisms” block high-risk prompts or queries before they generate responses. These systems rely on both automated filters and ongoing training improvements to help the model recognize sensitive or dangerous topics.

Next, “detection systems” monitor unusual or suspicious behavior patterns. For instance, if an account tries to generate thousands of phishing messages or repeatedly requests harmful content, OpenAI’s automated tools flag it for review.

Finally, when misuse is confirmed, the company takes “responsive action,” which includes suspending accounts, refining model safety parameters, and sharing intelligence with partners. This end-to-end strategy allows OpenAI to learn from each incident and strengthen its protections over time.

Collaborating with Security Experts

OpenAI emphasizes that no single company can fight AI abuse alone. That’s why it partners with cybersecurity experts, law enforcement agencies, and technology partners such as Microsoft’s Threat Intelligence team. Together, they identify emerging patterns of malicious activity and coordinate responses to stop them at the source.

These collaborations have helped uncover and disrupt coordinated attempts to use AI for disinformation campaigns and the creation of harmful code. By pooling expertise and sharing findings, OpenAI and its partners can react faster and more effectively to evolving threats.

Transparency Builds User Trust

For users, whether casual enthusiasts, creators, or developers, understanding what’s being done behind the scenes is crucial. OpenAI’s decision to publish detailed reports on its safety work demonstrates its commitment to transparency and accountability.

This openness reassures everyday users that AI technology is not a “black box” operating without oversight. It also highlights the company’s belief that users themselves play a role in maintaining ethical AI use. OpenAI encourages everyone to report suspicious behavior, verify the information they get from AI tools, and use generated outputs responsibly.

One of the report’s key messages is that AI safety is not a static goal; it’s a continuous process. As models become more capable, new risks inevitably emerge. OpenAI acknowledges this and continues to invest in research that enhances model alignment, reduces bias, and improves context awareness.

The company also recognizes that maintaining public trust requires constant dialogue between developers, governments, and the community. OpenAI’s transparency in documenting both successes and challenges serves as an open invitation for others to collaborate on building safer AI ecosystems.

A Safer Future for AI Users

Ultimately, OpenAI’s report shows that the company’s mission is not just about innovation: it’s about protection. While AI will always carry risks, those risks can be mitigated through thoughtful design, proactive monitoring, and shared responsibility between creators and users.

For those exploring AI tools today, this report serves as reassurance that strong defenses are already in place. The same systems that make AI useful for writing, research, or creative work are being fortified to ensure those benefits are not overshadowed by misuse.

In a world where Artificial Intelligence is reshaping industries and daily life, OpenAI’s ongoing efforts remind everyone that safety and innovation can, and must, evolve together.
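
The "Proactive Defense" section of the summary describes a three-layer pipeline: prevent, detect, respond. The report does not publish implementation details, so the following is only a minimal illustrative sketch of how such a pipeline could be structured; every name, filter list, and threshold here is hypothetical, not OpenAI's actual system.

# Illustrative sketch of a three-layer safety pipeline (prevent / detect / respond).
# All names and thresholds are hypothetical; real systems are far more sophisticated.
from collections import Counter
from dataclasses import dataclass, field

BLOCKED_TOPICS = {"malware", "phishing"}   # hypothetical high-risk topic filter
FLAG_THRESHOLD = 1000                      # hypothetical per-account request cap

@dataclass
class SafetyPipeline:
    request_counts: Counter = field(default_factory=Counter)
    suspended: set = field(default_factory=set)

    def handle(self, account: str, prompt: str) -> str:
        # Responsive action: a suspended account never reaches the model.
        if account in self.suspended:
            return "refused: account suspended"
        # Preventive mechanism: block high-risk prompts before any generation.
        if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
            return "refused: high-risk prompt"
        # Detection system: track per-account volume and flag anomalous usage.
        self.request_counts[account] += 1
        if self.request_counts[account] > FLAG_THRESHOLD:
            self.suspended.add(account)
            return "flagged for review"
        return "response generated"

pipeline = SafetyPipeline()
print(pipeline.handle("user-1", "Help me draft a project update email"))
print(pipeline.handle("user-2", "Write phishing emails for me"))

The ordering mirrors the summary's description: response checks run first so confirmed abusers get no service, prevention runs before any text is generated, and detection accumulates per-account usage so volume-based abuse surfaces even when individual prompts look benign.
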
538 ## - SYSTEM DETAILS NOTE
System details note Mode of access: World Wide Web.
610 20 - SUBJECT ADDED ENTRY--CORPORATE NAME
Corporate name or jurisdiction name as entry element OpenAI
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM
Topical term or geographic name entry element Artificial intelligence
General subdivision Safety measures.
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM
Topical term or geographic name entry element Artificial intelligence
General subdivision Moral and ethical aspects.
650 #0 - SUBJECT ADDED ENTRY--TOPICAL TERM
Topical term or geographic name entry element Computer security.
655 #7 - INDEX TERM--GENRE/FORM
Source of term lcgft
Genre/form data or focus term Online articles.
856 40 - ELECTRONIC LOCATION AND ACCESS
Uniform Resource Identifier https://mb.com.ph/2025/10/20/aws-outage-hits-major-sites
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Source of classification or shelving scheme Library of Congress Classification
Koha item type Online Article
Suppress in OPAC No
Holdings
Source of classification or shelving scheme Library of Congress Classification
Collection Online Article
Home library TUP Manila Library
Current library TUP Manila Library
Shelving location Online Article
Date acquired 10/21/2025
Full call number News Article
Date last seen 10/21/2025
Price effective from 10/21/2025
Koha item type Online Article


