Issue advocacy and public affairs campaigns are never easy, and the work is never stagnant. The landscape is constantly evolving, and a major part of today’s evolution is artificial intelligence (AI). It’s a buzzword. It’s a step stool. It’s a game-changer. It’s also a confounding forest of new technology and skyrocketing possibilities.
With these new developments come a lot of opinions and promises, from streamlined workflows to immediate policy summaries. If you find it all a bit daunting, you’re not alone. It’s a complicated system that requires a careful balance of opportunity and risk.
At Beekeeper Group, we’re enthusiastic supporters of your AI journey. We know what advocacy organizations need to succeed, and we’re here to work with you so you can navigate this new terrain with confidence and clarity.
With AI, you can draft messaging that relies on sourced and analyzed data, build straightforward A/B testing for communications, enhance your visual and design concepts, and so much more. You can bolster your SEO on the content that matters to you and make it easier for your advocates’ letters to get to legislators. If you’re not careful, you can also let falsehoods slip through the cracks or lose the trust of your most loyal advocates. Took a real turn there, but, truly, it’s all up to how you play it.
Let’s be clear: we’re not AI vendors trying to sell you the next shiny tool, nor are we tech evangelists proclaiming it as the solution to every problem. Instead, we’re seasoned pathfinders who’ve been charting the digital advocacy landscape since 2010. With hundreds of clients ranging from AI enthusiasts to cautious skeptics, we’ve trekked through the ups and downs of emerging technologies, gaining valuable insights along the way.
We didn’t write this report to tout the miracle of AI or stoke fear about its shortcomings. Our white paper serves as a map and compass for your journey through this modern ecosystem, with all its complexities and charms. You’ll find careful considerations, practical checklists, and a list of trusted resources to equip you for your future advocacy. We’ve also included insights and frameworks from our network of experts to help you develop a strategy that aligns with your values.
Whether AI feels like a promising innovation or a daunting complication, this guide will help you find your path. With Beekeeper Group as your guide, you’ll be equipped to explore AI thoughtfully and responsibly, leveraging it to drive meaningful growth while staying true to your mission.
Written with the insights of 80+ non-artificial practitioners!
We didn’t use AI to write this guide – but we did use one truly unique resource: YOU! Over 80 advocacy professionals attending our annual Buzz Advocacy Summit in July 2024 provided insights, explored the potential of AI in advocacy, and directly contributed to the list of practical AI applications for advocacy work included in these pages.
What does an advocacy practitioner need to know about AI?
Your organization has a mission, and it isn’t to send emails and analyze advocate data. To get where you need to be, you must be able to center your efforts on strategy, creativity, and connection. You know that, we know that, and the people who turn to AI to streamline their more tedious tasks know that.
AI can help you focus on what matters. It can improve your efficiency in everything from drafting campaign messages to digging through data. This technology brings tremendous opportunity, but it also comes with risks. To use it well and responsibly, you must create internal guidelines and best practices.
You’ll need clear, thoughtful guidelines around the following:
- Careful Consideration of Inputs: Many AI tools thrive on data, but if you go that route, be cautious about uploading proprietary information into an off-the-shelf tool.
- Data Privacy and Security: Define strict rules for how your organization handles sensitive advocate data. Ensure AI tools comply with privacy laws and protect Personally Identifiable Information (PII).
- Transparency: Set practices to disclose when AI is used, whether in communications, content creation, or data analysis. Transparency builds trust with your advocates and stakeholders.
- Human Oversight: Establish a balance between AI-driven tasks and human judgment. AI should support your work, not replace the critical decision-making process that advocacy requires. Ensure your team remains the final gatekeeper of all strategic choices.
- Ethical Use: Develop clear guidelines on where and when to deploy AI, especially in areas like public policy analysis or research, where human expertise and integrity should always take precedence.
You want to use AI in the most positive, functional way possible, and you want to do it in a way that aligns with your values, mission, and ethical standards. By building effective, reliable internal guidelines and expected practices around AI usage, you’ll ensure the foundation of your work remains strong and secure.
Once you have your footing set, you can move on to the next step: testing your practical knowledge and beginning to consider your AI plans. We’ve built a few processes to encourage these deliberations. First, check out Don’t Get Stung! Frameworks for Internal and External Advocacy & AI Policies. It will help you set the guardrails for using AI effectively and ethically. Then, dive deeper with Brainswarm Results: Real-Life Prompts to Inspire You to Use AI in Advocacy, a collection of creative ways to apply AI to your daily advocacy efforts. After that, you can roll up your sleeves and get down with Tools & Resources for Your Advocacy AI Future, where you’ll find everything you need to start implementing AI-powered tools in your organization.
AI isn’t a magic fix. This isn’t The Wizard of Oz, where all you need to do is click your heels and you’ll be home free. AI is a tool to help you work smarter, engage more meaningfully, and deliver advocacy that’s sharper, faster, and more impactful. The buzz is real, and so are the benefits. It just takes some work to find them. If you can set up an AI foundation that will safely harness the potential of this tool, these gains can be yours.
Don’t get stung!
Frameworks for internal and external advocacy & AI policies
You can’t just jump into AI without a care, like a pool-loving kid at summer camp. You need a plan, and it needs to be a good one. You should first develop a well-considered AI-use policy to ensure your team understands the acceptable and unacceptable use of this critical tool. These guidelines will establish clear guardrails for your team and remove any unnecessary ambiguity.
Here’s how to do that:
Consider purpose and scope separately when defining your initial AI usage policy — and make sure you understand what both terms encapsulate. Purpose is the why, while scope is the how. To determine your purpose, pull together any potential benefits that AI could bring to your team. To decide on your scope, get a good understanding of exactly how your team can use these tools.
Remember that you’re building an adaptable policy. It’s meant to shift as things change and grow. With this initial plan, you should create something you can adjust as your organizational needs evolve. It’s a good idea to start your policy at a relatively reserved level and establish a future roadmap for adjustments, since it’s easier to expand a policy than to restrict one. Moving through this progression will allow you to gather feedback and make new decisions about platforms and use cases as they arise.
Tips on defining purpose
Starting with purpose is usually best. Before jumping into your AI plans, you’ll want to get a good sense of your goals for the technology. Ask yourself why your team wants to use AI, and then go from there.
Answers to that question typically fall into a few categories:
- We want to save time by automating tasks.
- We want to accomplish our goals using fewer resources.
- We want to micro-target messaging to audiences without manually rewriting it.
Defining your organization’s purpose for using AI will set your team’s expectations and ensure your next choices further those goals.
Tips on defining scope
After you’ve determined where AI fits into your overall strategy, the next step is defining how it will be applied. Consider these questions:
- Who is eligible to use AI tools?
- Who is responsible for overseeing our AI policy and who are the stakeholders?
- When and where can our team access these tools?
- How much should we allow AI to integrate into our workflow?
These are difficult questions, and they need to be considered carefully. To help you work through them, we’ve examined them in detail. Here’s our rundown:
We’d bet most people in your organization could benefit from some form of AI, but it’s up to you to decide when and where these tools are appropriate. There is plenty of ambiguity in this AI ecosystem, so a thoughtful and practical policy will get everyone on the same page.
Who is eligible to use AI tools?
Your AI policy shouldn’t necessarily focus on individual roles but, rather, on the inputs and outputs those roles handle. For example, you may allow anyone who writes membership emails to use AI for that responsibility while prohibiting anyone who designs social media posts from relying on those tools. When making these decisions, specify what type of information is safe to introduce into an AI prompt and how AI-generated content should be reviewed for accuracy and reliability.
You might run into situations where AI usage should be restricted for certain employees. It’s likely that, in those circumstances, you’ll want to provide restrictions around particular tasks. We’ll dig into this option in the fourth question of this section.
Who is responsible for overseeing our AI policy and who are the stakeholders?
You need to know who is running the AI show in your organization, and establishing an AI Governance Board early in the process can make that clear to everyone. Believe us, you’re going to want to institute a formal, transparent process for what could easily turn into an informal, vague practice. With a board, your team members will be able to monitor decisions and understand why they’re made.
An AI policy is only effective in the world in which it was written. When the technology changes, the policy needs to change with it. It will be the board’s responsibility to review any technology advancements and update the policy to include any new tools that could be acceptable.
Once you’ve figured out who is responsible for AI oversight, you can identify your AI stakeholders, both internal and external. These are the people affected by the AI decisions you’re making for your team. For example, internal IT staff will be concerned about security, while external stakeholders may be directly affected by AI-assisted content creation. Each of these stakeholders offers insight into the appropriate role of AI in your organization.
When and where can our team access these tools?
It’s not just what you send into AI systems that can cause security or privacy risks; it’s also how you do it. To tackle that concern, you’ll need to delineate when and where your team can access these tools. You’re going to have to put some bumpers on this bowling lane because a free-for-all is only going to bring headaches.
For instance, even if you’ve determined which AI tools are acceptable and secure, if an employee were to access them from a home computer, using your organization’s proprietary data, they could unintentionally initiate a breach. To avoid this, you might determine that all AI use would be limited to organization-issued computers and networks.
How much should we allow AI to integrate into our workflow?
The most pressing question when determining scope is how deeply you should let AI integrate into your work. You’re going to need to find the right balance, and that’s not a simple decision.
Let’s cover the bases before you make a choice that will set the tone for the rest of your AI policies.
There are three primary levels of integration, each one with its own set of pros and cons. From least permissive to most permissive, here are those levels:
- Restricted use – AI tools are prohibited; no questions asked.
- Limited use – AI tools are generally prohibited except for specific, defined use cases.
- Open use – AI tools are freely allowed.
A restricted-use policy completely prohibits AI use. It’s the most straightforward policy and the easiest to enforce, but it’s not necessarily practical for organizations seeking to navigate an advocacy ecosystem that is increasingly turning to AI tools. Still, there is one situation in which it makes excellent sense: if you’re still hammering out the details of your AI policy, a restricted-use policy can serve as a stopgap until those details are settled.
Meanwhile, limited-use policies provide flexibility and options for AI use. You’ll be able to choose when and how you turn to AI — and the impact that it will make on your efforts and goals. This policy is a good jumping-off point for organizations that haven’t yet implemented AI into their workflows. Examples of limited-use policies include allowing teams to use one specific tool, such as Google Gemini or ChatGPT, or limiting the use of AI to brainstorming ideas while restricting AI use for drafting messaging. After exploring these approvals, you could open up your policy to include additional tools and outputs.
Within a limited-use framework, you can also explore allowing only locally installed AI tools. This could look like running an LLM (large language model) directly on a laptop rather than sending data over the internet for processing. It’s a little more complicated to set up, but it offers a lot more in terms of privacy.
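To make that concrete, here’s a minimal sketch of local-only use, assuming the open-source llama-cpp-python package and a model file you’ve already downloaded (the path below is a placeholder). Everything stays on the machine; nothing is sent over the internet:

```python
# Minimal local-LLM sketch: all processing stays on this laptop.
# Assumes `pip install llama-cpp-python` and a downloaded .gguf model file.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,       # context window sized for longer policy text
    verbose=False,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You summarize legislative text in plain language."},
        {"role": "user", "content": "Summarize this bill section: [paste text here]"},
    ],
)
print(response["choices"][0]["message"]["content"])
```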
Finally, an open-use policy is the most permissive system, giving team members the complete go-ahead to determine which tools and uses are most appropriate for them. There is still an option in this seemingly free-rein plan to overlay some parameters. Even in an open-use policy, you could prohibit team members from uploading proprietary data into AI tools or from using AI-generated content in member communications.
After answering these four scope questions and considering the purpose of your AI use, you’ll have the answers you need to build the first layer of your initial AI policy. Your team will get clarity about how and why they’re relying on these tools — and you’ll feel like the plan-creating master you are. With that confidence, you can move on to the next layer.
With great power comes great responsibility. Yeah, we know, a too-on-the-nose movie line, but it’s true – and remember Peter Parker got a family member killed by messing around with powers he didn’t understand. AI is an impressive, dynamic, and impactful tool. It’s also a risky endeavor when attempted by untrained or unbound hands. That’s where you come in. You’re going to need to create boundaries for your team and their AI use.
In an advocacy organization, where trust is truly the foundation of every success, the use of AI must be governed by clear and strict ethical and responsibility guidelines. Luckily, there are standard considerations and expectations to which we can turn.
It’s critical to quickly establish the ethical and responsibility guidelines for your organization, and you can do that by reviewing the possible impacts of your AI use, how you expect your team to behave with these tools, and the effects that your AI use could have on your staff. Once you’ve established those guidelines, you need to be able to communicate them clearly and effectively.
We’ve laid out some of these considerations here:
Set your ethical guidelines
AI can be great for your workflow productivity, but any thoughtful organization needs to consider the dilemmas that come with it. Life is all about balance, isn’t it? There are biases to contend with, data security concerns, employee retention questions, and more. If you don’t grapple with these issues, every AI effort you make will be compromised.
We’ve compiled a list of some of these considerations, but you’ll need to turn inward and determine which guidelines you need to include in your policy to ensure your team’s ethical AI use.
Here are our initial considerations:
- Algorithmic bias and discrimination: AI tools are trained with information available on the internet. Since internet content is influenced by the biases and discrimination of those creating it, AI often inherits those biases. Once that happens, the algorithmic bias will impact the responses that come from the tool. To tackle this problem, you’ll need real people who can carefully review AI outputs to clear them of discriminatory and biased information.
- Data security and privacy: AI uses a tremendous amount of data to generate outputs that seem like they could come from a human. With all that data comes ethical concerns about its collection and storage. To maintain secure processes, you’ll need to train your staff on methods for protecting security and privacy, especially as these tools get integrated into your systems and practices. This could also be another place for you to think about locally installed LLMs.
- Data manipulation and transparency: AI transparency is a top-of-the-list issue when it comes to trust. It can take many forms, but considering the tools you choose to use and your own labeling of AI-generated content is a good place to start. For the first, you should make sure any system you rely on has strong and transparent data collection, discrimination, bias, and security policies. For the second, you should clearly state which content is AI-generated across all platforms, including social media and messaging. It is also worth establishing a policy regarding what can be manipulated, and what cannot.
- Environmental sustainability: AI requires a tremendous amount of energy. The data centers behind it demand massive computing power, and much of the electricity powering them still comes from fossil fuels. For advocacy organizations focused on environmental sustainability or global climate issues, it would be imprudent to ignore these environmental impacts.
- Job displacement: For many people, AI sounds like job loss. People are worried that tasks that once required humans can now be done by these tools, and that translates to a smaller workforce. Advocacy organizations concerned with issues of employment, labor, poverty, and workforce rights should consider the impacts and impressions their AI use could create.
Set your responsibility guidelines
In addition to managing these crucial ethical considerations, you’ll need to deal with the issue of responsible AI use. While ethical considerations are a baseline for trustworthy AI-generated content, a reasonable advocacy organization will be sure to add extra responsibility measures to ensure the most productive, secure, and supportive practices. We’ve all got to try to be our best out here.
Here are some additional worthwhile considerations:
- Human oversight: As helpful as today’s AI tools are, they’re useless to an ethical advocacy organization without thoughtful human verification, monitoring, and editing; no AI-generated content is above human review. When turning to AI-generated content, clearly define your organization’s process for human review, including the steps required to verify the accuracy of every aspect of the content.
- Copyright, plagiarism, and vetting sources: It’s essential to properly vet and reference every bit of information used in AI-generated content. No one likes a thief, people. AI tools pull from sources across the internet, so ensuring that everything is accurately and appropriately cited is key to maintaining the integrity of your organization’s work. This will help you avoid copyright and plagiarism problems from improperly referenced details, as well as veracity problems with the sources themselves.
- Erosion of fundamental skills: It’s irresponsible for an organization to ignore the impacts that AI use can have on its team members. Leaning on AI tools too often can cause team members and advocates to lose some of the fundamental skills that make them effective in their roles. Written communication, verbal communication, and training retention can all diminish, but continuing education can help keep these skills sharp.
It’s important to stand behind what you decide when it comes to these ethical and responsibility guidelines. Your policy will only be as effective as its accountability requirements. After answering these questions and focusing on your ethical standards, you’ll need to determine how to communicate them and, then, how to enforce them.
Speak up! Communicate your policy to stakeholders
Every stakeholder deserves to understand if and how you’re relying on AI. Once you create your policy, you’ll need to communicate that in writing to both internal and external stakeholders.
For internal team members, that’s as simple as keeping them apprised of the policy and any ongoing changes. For external stakeholders, including members, advocates, and legislators, that requires tactful messaging that reinforces trust and confidence. You could add a short statement to your external communications that directs readers to your AI policy, display that policy on your website, or include a line on specific communications, when appropriate.
In both situations, communication is key to maintaining trust and reliability — values that can’t be sacrificed as you walk toward improved efficiency and connection.
At this point, you’ve nearly finished building an AI policy that will work for your team members, support your mission, and maintain the trust you’ve built between you and your stakeholders. Now, you need to make sure this policy can shift with the needs of your organization and grow with the advancements of AI. You can do that by establishing a timely feedback and monitoring loop. Basically, you’re going to need to get a lot of other opinions from a lot of other people.
A productive feedback and monitoring loop depends on both the quantity and quality of the feedback. Collecting data from both your internal teams and your external stakeholders can bring both.
Internally, consider polling your AI-using team members to identify pain points. For example, you could ask them to consider which tasks are most time-consuming and then think about expanding your AI usage policy to assist with those responsibilities. As AI adoption continues and new methods become available, you could ask team members to share new tools they’ve encountered and consider adding them to your list of acceptable options.
A multitude of tools is available to collect feedback from external stakeholders, and each one can provide useful insights into their experience with your AI-generated content. For example, you can run A/B tests in your advocate email communications to evaluate and quickly adapt your AI-generated messaging. You can also compare an AI-assisted email to one written by a team member to determine whether AI is an effective communication tool for your team. Comparing open and clickthrough rates provides a useful barometer for messaging impact.
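If you’re curious about the arithmetic behind that comparison, here’s a small sketch, using only Python’s standard library and invented numbers, of a two-proportion z-test on open rates, one common way to judge whether the gap between two variants is likely real:

```python
import math

def two_proportion_z(opens_a: int, sends_a: int, opens_b: int, sends_b: int) -> float:
    """Is variant A's open rate meaningfully different from variant B's?"""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)   # combined open rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    return (p_a - p_b) / se

# Hypothetical results: A = team-written email, B = AI-assisted draft
z = two_proportion_z(opens_a=312, sends_a=2000, opens_b=268, sends_b=2000)
print(f"Open rates: A {312/2000:.1%} vs. B {268/2000:.1%}, z = {z:.2f}")
# Rough rule of thumb: |z| > 1.96 suggests the difference is significant at the 95% level.
```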
Knowing what works is integral to the success of any system, especially one that is consistently growing and changing. Common policies currently in place regarding AI may not be appropriate or relevant after the landscape changes, so it’s imperative that you analyze and iterate when it comes to these decisions. Your established AI Governance Board should be meeting frequently to consider these feedback loops, advancements in technology, and the need for shifting guidance.
Brainswarm Results:
Real-life prompts to inspire you to use AI in advocacy
It’s one thing to write about AI and talk about what an organization needs to be successful when using these tools. It’s another to pull all of these hopes, dreams, and responsibilities into reality. At Beekeeper Group, we understand that you’re looking for real-life impact, and we’ve got the resources to get you there.
We’re constantly reaching toward the future when it comes to advocacy and technology. While most teams thrive in brainstorming sessions, we take it further with a good brainswarm. During our 2024 Buzz Summit, we invited our 80+ attendees to join the fun and consider how they would use AI in an ideal world. We found the suggestions of our fantastic friends and clients to be so valuable that we compiled them here for you.
Create and optimize content
Compelling content is the cornerstone of any good advocacy campaign, but sometimes creativity doesn’t easily strike. If you’ve ever been stuck staring at a blank page or scrolling for inspiration, could we interest you in a bit of AI? Here are a few ideas we gathered for how AI can spark content development:
- Writing assistance: AI can be a useful jumping-off point for communications like social media posts, emails, and more. It can quickly write your content, leaving you with more time to adjust and elevate the copy.
- Personalized communication: AI tools can create issue-specific and personalized communications. Remember: the more context you share in your prompt, such as your organization’s communications guide, tone, and voice, the better your first draft will be (see the sketch after this list).
- Multilingual and accessibility support: AI models are improving at nuanced language translation, such as recognizing proper nouns and other specifics. AI tools can also help with ADA accessibility by generating image and text descriptions for websites. These are productive ways to enhance your messaging.
- Visual content: AI is changing how we approach visual content creation. From tools that generate images and videos from simple text prompts to AI-powered editing tools that bolster existing visuals, AI tools can open up your creative possibilities.
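Here’s the sketch promised above for personalized communication: a hedged illustration assuming the openai Python package and an API key in your environment. The model name and helper function are ours for illustration, not a prescription, and every draft should still pass through human review:

```python
from openai import OpenAI  # assumes `pip install openai` and OPENAI_API_KEY set

client = OpenAI()

def draft_advocate_email(name: str, issue: str, tone_guide: str) -> str:
    """Draft a personalized action email; treat the output as a starting point only."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; use whatever your policy allows
        messages=[
            {"role": "system",
             "content": f"You draft advocacy emails. Follow this voice and tone guide: {tone_guide}"},
            {"role": "user",
             "content": f"Write a short email asking {name} to contact their legislator about {issue}."},
        ],
    )
    return response.choices[0].message.content

draft = draft_advocate_email("Jordan", "clean water funding", "warm, plainspoken, action-oriented")
print(draft)  # a human editor reviews and elevates this before anything is sent
```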
Bolster advocate data analysis and insights
Managing an effective campaign requires excellent analysis of your advocate and legislative data to make smart strategic decisions. Here are a few ways AI can help with data analysis and insights:
- Legislative analysis: AI tools can help with bill summaries, comparisons, and legal context analysis. With them, you can more easily assess and track legislation and understand how it may impact your stakeholders.
- Persona creation: AI data analysis can help review existing advocate data to identify patterns and trends among your audience, and then build demographic and psychographic personas (see the clustering sketch after this list). Through these personas, you can communicate more effectively, better target your advocates, and boost your advocacy engagement.
- Advocate data mapping: Practitioners can leverage AI to predict and suggest connections from advocate data, and then correlate these information points and create visualizations.
- Predictive analytics: With time, AI models will be able to predict potential voting outcomes for pieces of legislation. These predictions can help practitioners narrow down key states and lawmakers to target.
- Lawmaker background: AI can pull together lawmaker briefs to summarize a lawmaker’s bio, voting history, committee details, and more.
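Here’s the clustering sketch mentioned in the persona-creation item above: a minimal example assuming scikit-learn and invented advocate features. The clusters themselves aren’t personas; they’re raw groupings a strategist would then name and flesh out:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical features per advocate: actions taken, emails opened, years active
advocates = np.array([
    [12, 40, 5.0],
    [ 1,  3, 0.5],
    [ 8, 25, 3.0],
    [ 0,  1, 0.2],
    [15, 55, 7.0],
    [ 2,  6, 1.0],
])

scaled = StandardScaler().fit_transform(advocates)  # put features on a common scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(labels)  # e.g. [0 1 0 1 0 1]: a "super-advocate" group and a "new joiner" group
```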
Build better campaign plans
Though AI can’t replace the nuanced understanding advocacy practitioners bring to building an issue campaign, it can streamline the work and help it move more quickly. Below are a few ways AI can help you find key targets, craft messages, and create campaign strategies:
- Identifying campaign targets: AI can analyze available data to help practitioners find the right constituent and legislative targets for an issue campaign, allowing you to pinpoint those who will be most receptive and likely to take action.
- Creating tailored messages: By incorporating available data or audience insights into your prompt, AI can quickly build campaign messages that feel personal and targeted.
- Building levels of engagement: AI can help you build a “ladder of engagement” for your advocates, generating ideas for different tiers and putting together an example point system.
- Strategic planning: AI can generate a full strategic communications plan. By including a timeframe, desired channels, and the overall campaign goal, practitioners can generate an initial plan for their campaign. You can even get its guidance with campaign budgets, media lists, and advertising plans.
- Building segmented emails and A/B tests: One of the most accessible use cases for AI is generating email content specific to an audience and campaign. AI can create emails based on different audience segments and messaging variants that can be used for A/B testing (a list-splitting sketch follows this list).
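And here’s the promised list-splitting sketch: before any A/B test, you need a clean, reproducible split of your audience. This uses only the standard library and hypothetical addresses:

```python
import random

def split_ab(advocates: list[str], seed: int = 42) -> dict[str, list[str]]:
    """Shuffle reproducibly, then split into two near-equal halves."""
    rng = random.Random(seed)   # a fixed seed lets you recreate the exact split later
    shuffled = advocates[:]
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return {"A": shuffled[:mid], "B": shuffled[mid:]}

groups = split_ab(["ana@example.org", "ben@example.org",
                   "cai@example.org", "dee@example.org"])
print(groups)  # send variant A to one half, variant B to the other, then compare rates
```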
Improve advocate training
- Building training session agendas: AI can build a training outline or agenda based on a prompt that includes the topic and training time. Though you’d need to personalize the training to your unique stakeholders and content, this exercise can streamline the process.
- Video editing tools: Different editing tools may use AI to improve audio quality, automatically generate captions, and even fully edit video clips based on analysis of the footage. These tools can help you create high-quality advocacy training content more easily and quickly.
- Generating scripts for learning modules and animations: Depending on the length and complexity of the topic, AI can generate rough drafts of scripts for animations, learning modules, and more. A clear, detailed initial prompt is key to developing a quality first draft.
- Adapting existing training elements to different audiences: Based on a comprehensive initial prompt, AI tools can analyze existing materials, like presentations and scripts, and generate new versions of the material adapted for a different audience.
- Building gamification models: AI can build gamification models for advocacy programs, creating tiers of engagement, point systems based on advocacy actions, and ideas for names and supporting language for the program (a toy point-system sketch follows this list).
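To show the shape of the point systems mentioned above, here’s a toy sketch with invented actions, values, and tier names; an AI tool would propose something similar for you to refine:

```python
# Hypothetical point values and tiers for an advocacy "ladder of engagement"
ACTION_POINTS = {"share_post": 5, "send_letter": 10, "attend_townhall": 25, "recruit_advocate": 40}
TIERS = [(0, "Scout"), (50, "Builder"), (150, "Champion")]  # (threshold, tier name)

def tier_for(points: int) -> str:
    """Return the highest tier whose threshold the advocate has reached."""
    current = TIERS[0][1]
    for threshold, name in TIERS:
        if points >= threshold:
            current = name
    return current

actions = ["send_letter", "share_post", "attend_townhall", "send_letter"]
total = sum(ACTION_POINTS[a] for a in actions)
print(total, tier_for(total))  # 50 Builder
```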
Manage new frontiers of advocacy and AI
- Improving message deliverability to Congress: AI can generate multiple versions of potential letters to lawmakers that can be customized based on what we know about the advocate and what we know about the legislator. This could allow letters from advocates to more easily pass through email filters and hit lawmaker inboxes.
- Tuned AI models: Tuned AI models, or pre-trained AI models, are further customized using an organization’s specific data. This can improve the AI’s performance and understanding of a particular industry, making it a good choice for organizations that feel ready to fully embrace the technology.
- Website and video creation: Some design tools are making videos or whole websites on the fly, which could allow organizations to generate a hub completely from scratch.
- Search Engine Optimization (SEO): AI can improve a website’s search engine ranking by generating meta tags, meta descriptions, and website keywords (a deliberately simple sketch follows this list). It can also analyze your content and data to show how your web content can be optimized for SEO.
- Creating personalized advocacy results for legislative issues: AI can generate custom reporting, search results, and more — all based on an organization’s legislative needs. As advocacy tools continue to integrate AI into their platforms, more tools with these capabilities will likely be available.
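As promised in the SEO item above, here’s a deliberately simple, standard-library stand-in for the keyword side of that work. Real AI SEO tools use language models rather than word counts; this toy version just shows the shape of the output:

```python
import re
from collections import Counter

STOPWORDS = {"the", "and", "for", "that", "with", "this", "are", "our", "your", "can"}

def suggest_keywords(page_text: str, top_n: int = 5) -> str:
    """Naive keyword suggestion: the most frequent non-stopwords on a page."""
    words = re.findall(r"[a-z]{4,}", page_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    keywords = [word for word, _ in counts.most_common(top_n)]
    return f'<meta name="keywords" content="{", ".join(keywords)}">'

sample = "Clean water funding protects families. Tell Congress clean water matters."
print(suggest_keywords(sample))
# <meta name="keywords" content="clean, water, funding, protects, families">
```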
Use Case:
Using AI tools for design ideation
AI tools that support design work are advancing and gaining popularity. From image generation to mood board development to video creation, advocacy organizations can turn to these tools for a variety of creative elements. The key is knowing which elements are best suited for this technology and maintaining the critical balance between the tool and human enhancement.
We know all about this balance because we strike it in our own client work. Beekeeper Group uses AI to streamline processes, while always prioritizing quality and creativity. Our application of these tools has helped us brainstorm design ideas, develop creative concepts, and generate mood boards for complex projects. We appreciate the power of this technology — and find its outputs useful in pushing us along the right path, but we understand that it’s just the beginning.
When we use AI to generate a mood board, we always have a qualified designer react, respond, and recreate what they are seeing there. We treat the output as a jumping-off point that we can use to inform our direction on color, imagery, and general style. Mood boards help us establish different visual identities for a project and are a great tool for narrowing down ideas and creating a clearer vision of its creative direction. Instead of spending a lot of time fully brainstorming different concepts, we use AI to visualize ideas quickly and determine whether they have the potential for success.
Though AI can help us create stronger design work faster, it doesn’t replace the critical thinking that designers rely on when considering organization context, cultural relevancy, aesthetics, and overall direction. Just like AI use in other aspects of advocacy work, human oversight and expertise will continue to be essential in creating quality work.
There’s a powerful case to be made for including AI in your design efforts. At Beekeeper Group, when we turn to AI for mood board generation, we can generate a greater number of unique ideas that we can then shape into quality final products. There is still a lot to explore and question in this area, and, luckily for us all, exploration and curiosity are an important part of this industry.
Tools & resources for your advocacy AI future
The comforting thing about diving into AI with your advocacy is that you don’t have to start from scratch. There are several places to begin when experimenting, testing, and adopting these tools. To feel more confident in your decision, you can explore each tool’s usability, technology, and data security policies. With that information, you’ll be able to more thoughtfully integrate the right one into your workflow — then you can test and try and adjust to your heart’s content.
For our part, we’ve compiled a list of resources that may be helpful at the start of that journey, along with a checklist to assess your AI use.
Helpful AI Resources
- There’s An AI For That is an up-to-date list of tools that could support your AI goals.
- The Mercatus Center at George Mason University offers an easy-to-read primer on the core concepts of AI for policymakers.
- EY breaks down the top AI-related policy issues from a board and management perspective.
- The Bipartisan Policy Center hosts a robust library of blog posts, explainers, and reports diving into key concepts across the industry.
- The Responsible Artificial Intelligence Institute is a good resource for tracking standards, frameworks, and best practices for governance.
Your AI checklist: How can AI answer your advocacy needs?
As advocacy organizations increasingly explore and adopt AI tools, it’s crucial to approach implementation thoughtfully and systematically. This checklist is designed to help you navigate the complexities of AI adoption while maintaining the authenticity and effectiveness of your advocacy efforts.
Initial assessment: Should we use AI?
Before diving into implementation, consider these fundamental questions:
Strategic Value Assessment:
- Have we identified specific advocacy challenges that AI could help solve?
- Do we have the resources, including time, budget, and staff, to properly implement and oversee our chosen AI tools?
- Have we compared the cost-benefit of AI versus traditional generation methods?
- Are our stakeholders likely to be supportive of AI implementation?
Readiness Check:
- Does our team have the necessary skills to effectively use AI tools?
- Do we have clear goals for what we want to achieve with AI?
- Have we assessed the potential risks that AI could bring to our advocacy mission?
- Are our data management practices robust enough for AI implementation?
Organizational Fit:
- Does AI align with our organization’s values and mission?
- Have we considered how AI use might impact our authenticity with advocates?
- Are there specific areas where AI could enhance rather than replace human connection?
- Have we identified our “red lines” for AI usage?
Checklist:
Content generation and oversight
Content Mix Assessment:
- Calculate the percentage of AI-generated vs. human-written content
- Document which types of content are created with AI assistance:
  - Social media posts
  - Email drafts
  - Bill summaries
  - Advocate training materials
  - Multilingual communications
  - Visual content
- Set target ratios for the AI and human content mix by content type
- Track effectiveness metrics for both AI and human-generated content
Human Oversight Process:
- Establish clear review protocols for AI-generated content
- Identify team members responsible for AI content review
- Create guidelines for fact-checking AI outputs
- Document editing processes specific to AI-generated content
- Set up quality control checkpoints before publication
Checklist:
Campaign planning and implementation
Strategic Planning:
- Document how AI will be used in campaign planning
- Create guidelines for AI-assisted audience targeting
- Establish processes for AI-supported message testing
- Set up frameworks for measuring AI’s impact on campaign effectiveness
Advocate Engagement:
- Define how AI will support advocate training programs
- Create protocols for AI-assisted personalization
- Establish guidelines for AI use in gamification
- Document approach to AI-supported advocate journey mapping
Checklist:
Data analysis and insights
Data Management:
- Audit how advocate data is being used in AI systems
- Document which AI tools have access to which data types
- Review data retention policies for AI tools
- Establish data minimization protocols
- Create a process for regular data access audits
Analytics Implementation:
- Set up systems for AI-assisted legislative analysis
- Create protocols for persona development
- Establish frameworks for predictive analytics
- Document processes for AI-supported data visualization
Checklist:
Transparency and disclosure
Public Communication:
- Develop a policy on the disclosure of AI-generated content
- Create standard language for AI content disclosure
- Document where and when AI use should be disclosed
- Train team on transparency guidelines
- Monitor public response to AI disclosure
Checklist:
Innovation and future planning
Emerging Technologies:
- Create a process for evaluating new AI capabilities
- Set up a framework for testing AI-powered deliverability improvements
- Document the approach to AI-assisted SEO
- Establish guidelines for exploring tuned AI models
Checklist:
Regular review process
Quarterly Assessment:
- Review the effectiveness of AI implementation
- Update policies based on lessons learned
- Assess the cost-benefit of AI tools in use
- Gather feedback from team members
- Update training materials as needed
Checklist:
Success metrics
Performance Tracking:
- Define key performance indicators for AI implementation
- Set up a tracking system for AI-related metrics
- Create a regular reporting schedule
- Compare AI vs. traditional approach outcomes
- Document success stories and lessons learned
Conclusion
AI is complicated. We can all agree on that. Despite this complexity, it is also helpful, inspiring, and pretty fun — if you know what you’re doing. And if you don’t, we promise you can get there.
Once you’ve reviewed this white paper, you’ll be in an excellent place to start integrating these tools into your workflows. You should know the basics about advocacy and AI, understand how to define your purpose and scope, be able to establish ethical and responsibility guidelines, and appreciate the need for timely feedback and monitoring loops. You should feel confident in your ability to navigate real-world AI quandaries and have an easy-to-understand checklist of what it takes to effectively pull in AI resources.
However, don’t feel bad if this all still makes you a bit nervous. This is a safe space, and we’ve got you. We’ve been weaving through these complicated digital streets since 2010, and we know our way around this neighborhood.
If you’re ready to start bringing AI tools into your advocacy work, Beekeeper Group is just a call or message away. We’re here to help you use everything this modern digital landscape has to offer to achieve your mission.

Your adventure begins here.
Beekeeper Group knows that AI is a complicated frontier. We put together our AI report, Navigating the AI Wilderness, to help advocacy professionals like you understand the benefits, risks, and opportunities of utilizing AI in their advocacy work. If you’d like to discuss how to approach AI usage at your organization, send us a message.