
Safe use of AI in research and writing


Safe use of AI in research and writing | Guest Author Perspective from Dr. Anupama Kapadia, Department Head, Enago Life Sciences & Publication Support | Editor-in-Chief, Enago Academy

Preface: There are varying opinions about how to ethically use AI in research and writing while ensuring the integrity of data and the research lifecycle. We recently spoke about this topic with Dr. Anupama Kapadia, Department Head, Enago Life Sciences & Publication Support and Editor-in-Chief, Enago Academy.

1. What are the key ethical concerns when using AI tools in research and scientific writing?

AI tools have matured rapidly over the last few months, yet surprisingly, the primary concerns still revolve around data privacy, intellectual property, and the risk of inaccurate outputs. It’s crucial for researchers to understand that while AI can streamline data analysis and content creation, it also brings challenges such as ensuring the confidentiality of sensitive data and accurately crediting sources. Additionally, AI’s capacity to generate content needs careful monitoring to avoid the dissemination of misleading information.

2. How can researchers ensure the integrity of data when using AI in their studies?

To safeguard data integrity, researchers should integrate robust validation and verification practices within their workflows. This means not only checking AI outputs against known data but also conducting periodic reviews and audits. It’s like having a continual feedback loop that checks and balances what the AI is producing, ensuring it aligns with ethical research standards.
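The feedback loop Dr. Kapadia describes can be as simple as programmatically checking AI-extracted values against known reference data before they enter a manuscript. The sketch below is purely illustrative (all names and data are hypothetical, not drawn from any particular tool or workflow):

```python
# Minimal sketch of a validation check for AI-assisted data extraction.
# reference_values: numbers verified by a human against the source data.
# ai_outputs: the same fields as extracted by an AI tool.

def flag_discrepancies(reference_values, ai_outputs, tolerance=0.01):
    """Return the fields whose AI-extracted value is missing or
    deviates from the verified reference beyond the tolerance."""
    flagged = []
    for key, expected in reference_values.items():
        observed = ai_outputs.get(key)
        if observed is None or abs(observed - expected) > tolerance:
            flagged.append(key)
    return flagged

# Two fields match the verified data; one has drifted (0.3 vs. 0.03).
reference = {"mean_dose": 5.0, "n_subjects": 120.0, "p_value": 0.03}
ai_result = {"mean_dose": 5.0, "n_subjects": 120.0, "p_value": 0.3}
print(flag_discrepancies(reference, ai_result))  # ['p_value']
```

Flagged fields would then go to a human reviewer rather than being corrected automatically, keeping the researcher in the loop as the interview recommends.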

3. Can you discuss the implications of AI in plagiarism detection and its impact on academic honesty?

AI tools are revolutionizing plagiarism detection by identifying not just exact copies but also paraphrased content and improperly cited sources. This technology significantly strengthens academic integrity. However, it’s also important to calibrate these tools to distinguish between common knowledge and genuine instances of plagiarism, preventing any undue allegations.

4. What are some guidelines researchers should follow when using AI to ensure ethical compliance?

AI ethics are fundamental to maintaining trust. When AI tools are used responsibly, they enhance the research’s credibility by providing replicable and accurate results. Ethical guidelines ensure that these tools are not used to cut corners or embellish results, which could otherwise erode the scientific community’s trust. So to answer your question, researchers should start with full transparency, declaring the use of AI tools in their work. They should also be aware of and actively mitigate any biases these tools might carry. Regular training and reading on the latest AI developments and ethical standards can also help keep misuse in check.

5. What is your perspective on the balance between AI-driven efficiencies and the potential loss of human touch in scientific communication?

My favorite question! It’s all about balance. It always will be. AI offers incredible efficiencies, speeding up data processing and content generation, but the human touch is irreplaceable, especially in interpreting nuanced scientific information and ethical considerations. The goal should be to use AI to augment human capabilities, not to overshadow them.

6. How can institutions foster an ethical AI culture in research and publication?

Institutions can lead by example by implementing clear AI usage policies and providing ongoing education on ethical AI practices. Encouraging open dialogue about the implications of AI, and setting up a committee to oversee AI ethics, can also help maintain an ethical framework that evolves with technological advancements.

7. How can AI enhance the development and writing of grant proposals?

AI can be a game-changer in grant writing by helping researchers identify the most relevant funding opportunities and tailor their proposals accordingly. AI tools can analyze vast amounts of data to predict which projects might appeal to specific funders based on past funding patterns. Additionally, AI can assist in refining the narrative by ensuring clarity and impact, which are crucial for standing out in competitive grant applications. The idea is to use AI to streamline the preparation process while elevating the quality of the proposals.

8. What potential does AI hold in transforming the review and assessment processes of grant proposals?

In the review process, AI can significantly reduce the burden of initial screening by automatically evaluating proposals against specific criteria, such as alignment with the funding body’s goals and the completeness of the application. This allows human reviewers to focus on more nuanced aspects like the feasibility and innovation of the proposed research. Moreover, AI can provide a more consistent and unbiased assessment by mitigating personal biases that might affect human reviewers. This could lead to fairer and more objective decisions on which projects receive funding.

However, note that open AI models should not be used for any proposal reviews or assessments. Most funding bodies, like the NSF, already have guidelines around this.

9. What are some of the potential drawbacks or risks associated with using AI in the grant proposal development, review, and assessment processes?

While AI offers many benefits, it’s not without its challenges. One risk is over-reliance on AI recommendations, which could potentially overlook innovative but unconventional proposals that do not fit typical patterns recognized by the AI. Another risk is the issue of transparency; relying on AI algorithms that aren’t easily understood or scrutinized can make it difficult to explain funding decisions, potentially undermining trust in the funding process. It’s vital to address these issues thoughtfully to fully leverage AI’s potential without compromising fairness or innovation.

10. What future developments do you foresee in the intersection of AI and research ethics?

I anticipate advancements in AI that will more deeply integrate ethical decision-making within algorithms, potentially even simulating ethical review processes. Also, as AI tools become more embedded in research, I expect a surge in ethical guidelines that adapt to these new technologies, ensuring they complement traditional research methodologies. So in sum, establishing strong ethical guidelines, employing secure AI systems that ensure transparency, and enforcing strict repercussions for ethical violations will be key measures. For AI ethicists like me, it’s also about cultivating a research culture that values integrity and transparency over merely impressive results.


Special thanks to Dr. Anupama Kapadia, Department Head, Enago Life Sciences & Publication Support and Editor-in-Chief, Enago Academy.

More about Dr. Anupama Kapadia, Department Head, Enago Life Sciences & Publication Support and Editor-in-Chief, Enago Academy

Dr. Anupama Kapadia is a licensed specialist in Physical Medicine and Rehabilitation (Orthopedics) by qualification and a scholarly publishing professional by vocation. She currently heads various portfolios related to medical communications, publication support, research integrity, and author training. With 15+ years of experience, 10+ publications and posters, and 10+ white papers/survey reports, she also actively leads several projects focused on fulfilling client requirements by enhancing Enago’s editorial and peer review systems and author services.

She is a member of and volunteers in professional associations such as ISMPP, MAPS, COPE, ISMTE, and EASE, which focus on networking and knowledge sharing within the community and help establish best practices among stakeholders. She is also an active member of Peer Review Week and Open Access Week planning committees.

Would you like to be featured on our blog?

Get in touch with us!
