Updated March 8, 2024

Thanks to the IPR Digital Media Research Center team for their considerations and input. 

With the growing use of generative AI and the potential for increased misinformation and decreased transparency in the research space, the Institute for Public Relations (IPR) has created a disclosure policy for labeling generative AI use in research-related content. Content labels help prevent disinformation, provide transparency about material sources, and help prevent plagiarism. This ensures the openness, accountability, honesty, and rigor of the material that IPR creates and publishes.

Considerations When Using Generative AI

●      Generative AI cannot verify the quality or accuracy of the work it draws upon and is prone to hallucinations (i.e., fabricating content).
●      Generative AI output may contain biased information.
●      Generative AI models have a training data cutoff, so their output may not reflect the most recent research or material.
●      Confidential or proprietary information should not be uploaded or fed to generative AI (including the responses of participants who have requested confidentiality or anonymity).

Therefore, IPR requires in-text disclosures of substantive generative AI use in its research-related materials. The research materials that require disclosure include:
●      Blogs, blurbs, and research summaries
●      IPR Research Letter articles
●      IPR Signature Studies
●      Public Relations Journal articles
●      IPR Deconstructing series articles
●      Presentation materials at IPR events or programs

Rules and Guidelines for AI Use in IPR-Published Work

●      Authors must disclose how generative AI was used and to what extent.
●      Generative AI should be used primarily for editorial assistance. AI should not be used as a co-author for a research paper, as it cannot take responsibility for the work.
●      Authors and creators must take full responsibility for all content created by generative AI, including the use of copyrighted material, and ensure the content is factual, credible, accurate, and supported by other reliable sources.
●      AI should not be used as a primary or secondary source. Instead, it is the author’s responsibility to track original content sources for proper attribution.
●      Each use of generative AI within a document should be addressed individually.
●      Authors are responsible for complying with relevant laws and regulations related to AI-generated content.

Generative AI use should be labeled or disclosed when used for:

●      The research process, such as collecting or analyzing the data
●      New content generation (visual or written)
●      Content a reader would assume to be human-created
●      Language translation (content should also be checked by a fluent speaker)
●      Editing that changes the style, voice, or composition of the writing

Generative AI typically does not require a label or disclosure for:

●      Idea or topic generation/brainstorming
●      Grammatical changes or other minor edits that do not change the overall content, style, or voice of the piece
●      Summarizing of material if it does not change the content

Ways to disclose:

●      In-text citation using APA style
●      A reference in a paragraph in the report
●      Footnote
●      Endnote

What should be included in the label or disclosure:

●      The generative AI program used (e.g., ChatGPT 4.0)
●      The prompt used (e.g., summarize these research findings in two paragraphs)
●      The section where generative AI was applied (e.g., introduction)
●      Who used the generative AI (e.g., Dr. Tina McCorkindale)
●      The date, if applicable

Here is an example of a footnote:

The introduction was created by inserting the bulk of the report, written solely by Dr. McCorkindale, the primary author of this report, into ChatGPT 4.0 and then asking it to create two introductory paragraphs. Dr. McCorkindale checked the accuracy of the content and edited the introduction.

For more information about content disclosure, please refer to Dr. Cayce Myers’s IPR article, “To disclose or not to disclose: That is the AI question.” IPR suggests that when in doubt about whether AI-generated content should be labeled, it is best to over-disclose rather than under-disclose.

Please note: This policy will evolve to ensure it remains relevant and effective throughout changes in the AI disclosure landscape.

Heidy Modarelli handles Growth & Marketing for IPR. She has previously written for Entrepreneur, TechCrunch, The Next Web, and VentureBeat.