Concrete, implementable policy ideas for ensuring the gains from AI flow to American families.
- Issuing Organization: The Center for Shared AI Prosperity
- Date of Issuance: May 2026
- Submission Deadline: Rolling — submissions encouraged by June 15, 2026
- Compensation: Track 1: no compensation; submissions are incorporated into research evaluation. Track 2: $3,000 per accepted final submission.
TL;DR
- Americans are justifiably worried about AI-driven job displacement, and they know we need better policy ideas.
- We need those ideas to come from AI policymakers, experts on the social safety net, card-carrying economists, and rigorous hobbyists.
- The Center for Shared AI Prosperity will work with research partners, including Blue Rose Research, to survey your ideas and assess Americans’ reactions to them on a rolling basis.
- We will work with individuals who submit the strongest proposals to build out their work and advance their ideas in the public debate.
Background and Purpose
Americans already have strong views about who should benefit from AI. The Center for Shared AI Prosperity is a research coalition designed to hear those views and act on them, by cultivating, evaluating, and advancing the most compelling proposals to ensure that all Americans thrive as AI advances. We are focused on progressive visions for the U.S. economy in a near-term world in which AI has replaced and disrupted a large number of jobs. Recent national polling shows that redistributive proposals earn broad support when tied to dignified work, funded by taxes on those benefiting from AI, and linked to programs voters already trust. The question is what that looks like when fleshed out into real policy.
The Center is soliciting policy ideas on two tracks in order to encourage engagement from a range of thinkers. We want rigorous ideas from leading economists, policy researchers, academic institutions, labor leaders, and technologists. We also want ideas from people who do not traditionally participate in academic or think tank work and who may not have the traditional credentials expected in policymaking, but who have something novel and thoughtful to contribute. In both tracks, we invite ideas that go beyond diagnosis to offer concrete, implementable public policy proposals. We are interested in ideas that are grounded in rigorous analysis, responsive to real-world political and institutional constraints, and attentive to both short-term transition needs and long-term structural change.
As phase one of this work, the Center will:
- Work with policy leaders and Blue Rose Research to evaluate and test the substantive and political rigor of submissions;
- Publish and promote a wide range of popular and actionable policy proposals; and
- Advocate for compelling visions of a just economy in the case of significant AI-driven job disruption and displacement.
Overview of Request for Ideas
There are two tracks for participation. You can choose to submit ideas for one or both tracks.
Track 1
Track 1 aims to expand the scope of participation to a broader range of people than those who traditionally engage in academic or think tank work. If you think you have an idea that politicians, policymakers, and elected officials need to hear about how our economy should function after AI significantly disrupts the status quo, we want to hear it.
We encourage you to keep your submissions to a 500–1,000 word explanation of your idea, written to convince a politician. If you would like to submit a longer or shorter explanation, you may do so. If you have written a more in-depth piece on your idea, please provide a link or attachment in addition to the summary explanation. Please also include your name and contact information, plus any biographical information you would like to share.
There is no compensation for individuals who submit ideas under Track 1. The benefit of participating in this track is that CSAIP advisors and Blue Rose Research will review submissions, test or build out some of the most promising ideas, and help elevate the best ideas to national leaders. As we build out our work, we may invite some submitters to engage in additional conversations.
Track 2
Track 2 is for individuals interested in producing more comprehensive, original policy proposals. If you are interested in Track 2, please submit a proposal for additional work. CSAIP advisors will review proposals and select some submissions for further development. Accepted Track 2 final submissions receive $3,000 each.
Proposals should include:
- A 500–1,000 word description of each proposed policy, clearly identifying which issue area from the scope of work the contributor intends to address;
- A one-paragraph biography for each member of the proposed project team, highlighting their most relevant experience;
- A CV or resume for each member of the proposed project team.
Selected projects will then be asked to produce original policy concepts addressing one or more of the following thematic areas. Applicants may focus on a single area in depth or propose an integrated approach spanning multiple areas.
1) Taxation and Revenue Policy
New or reformed revenue raisers that allow the U.S. government (federal, state, or local) to capture a fair share of corporate wealth generated by AI.
- How should policymakers define who or what should be taxed?
- What vehicle(s) or base would work best for new taxation (e.g., cash flow, distributed earnings, token usage, compute spending)?
- At what levels?
- How much revenue would be raised over what time?
2) Income Support and Social Safety Nets
New or reformed income support for workers displaced by AI.
- Who would be eligible?
- What would be the benefits?
- What would be the requirements (e.g., public service)?
3) Labor Market Restructuring and Workforce Development
Strategies for adapting labor markets to a post-displacement economy (e.g., reskilling and lifelong learning, public employment programs, job guarantee programs).
- Who would be eligible?
- What would be the benefits?
- What would be the requirements?
4) Ownership, Governance, and Stakeholder Models
Alternative models for distributing the gains from AI, such as public or collective ownership stakes in AI infrastructure and platforms, worker cooperatives and employee ownership models in AI-intensive firms, sovereign wealth funds capitalized by AI-sector revenues, and data dividends and other frameworks for compensating individuals for data contributions.
The Center is interested in proposals that evaluate potential constraints, requirements, or administrative burdens in response to perceived political pressures. These elements could include work requirements, benefits that only accrue over time, or other limitations.
The Center is open to proposals that go beyond the four issue areas described above.
The best proposals will grapple with how the policy would be implemented given uncertainty about AI’s labor-market impacts (e.g., triggers or other strategies for how the policy would scale up) and propose strategies to mitigate its substantive and political risks.
CSAIP will move some Track 2 proposals into the testing and evaluation process. Accepted final submissions receive $3,000 each.
Each funded project will be expected to produce the following:
- Policy Summary: a 1,200–1,500 word, punchy, engaging summary of the policy proposal for an audience of policymakers and other influential thinkers. The piece should be written in the tone of a Substack post.
- Policy Memo: a concise policy brief (five to seven pages) summarizing key recommendations in accessible language, suitable for distribution to federal, state, and/or local policymakers and other influential thinkers. A template will be provided by the Center upon acceptance of the proposal.
Looking for inspiration or new to the field? See our sample scenarios exploring how AI may impact the U.S. economy by 2030. These scenarios are prompts to get you thinking, not forecasts.
Research and Testing Process
The Center for Shared AI Prosperity is partnering with Blue Rose Research to conduct public opinion research on submitted proposals. Blue Rose Research develops cutting-edge products used by progressive organizations and causes, and its state-of-the-art tools, powered by machine learning and Bayesian statistics, accurately capture public sentiment. Read Blue Rose Research’s current public research on AI here.
Eligibility
The Center is seeking submissions from:
- Academic researchers and university-affiliated research centers
- Independent policy think tanks and research organizations
- Nonprofit organizations with relevant policy capacity
- Individuals with a background in tax, budget, and redistributive policymaking or AI
- Individuals not described above who have an idea they want to share
Collaborative proposals involving multiple institutions or cross-sector teams are encouraged; note that compensation is awarded per accepted proposal, regardless of the number of individual contributors.
Submission and Ownership Terms
By submitting ideas, proposals, or suggestions, all applicants acknowledge and agree to the following:
- The Center plans to publish final submissions for funded projects under a Creative Commons Attribution 4.0 or similar license. Authors retain the right, and are encouraged, to cross-post or publish submissions elsewhere, referencing support from the Center.
- By submitting an idea under Track 1 and/or Track 2, the applicant grants CSAIP a non-exclusive, perpetual, irrevocable, worldwide, sublicensable, royalty-free license to use or create derivative works of the submissions in any manner or media and for any purpose whatsoever at the sole discretion of CSAIP.
- CSAIP shall not be responsible for any costs incurred by the applicant in preparing or submitting a proposal. CSAIP makes no guarantee of confidentiality with respect to submitted materials. CSAIP shall have no obligation to make any use of any submission. The applicant represents and warrants they have all intellectual property rights necessary to make this submission under these terms.
- The applicant agrees to release, defend, indemnify, and hold CSAIP, and related persons, harmless from and against any and all liabilities in connection with any intellectual property or other claim related to the applicant’s submission of any policy proposal.
By submitting, you also agree to our full Submission Terms.
Additional Background on How CSAIP Views the Problem
Currently, AI capabilities are advancing faster than the political system is responding to them. The majority of Americans are rightfully anxious about the near- and long-term impact of the technology, and concerned that without action, AI could further solidify a system that too often seems rigged against their interests. Job displacement is already underway. But Americans aren’t anti-technology: they’re optimistic about what AI can do yet wary of who will capture its benefits. Attitudes among progressive thought leaders are forming now, and no one is providing a coherent framework to guide them. Some are drifting toward reflexive skepticism. Others are adopting Silicon Valley’s language wholesale. Neither posture reflects what Americans actually think or what they want to hear from their leaders.
There is no prominent redistributive economic framework for AI. Existing AI policy work falls broadly into two categories. The first is safety and governance — essential work, but focused on regulating model development and deployment rather than on the economic transformation. The second is the application of AI as a justification for existing small-scale policy priorities: workforce development programs, digital equity initiatives, education funding. These efforts have value, but they do not grapple with the scale and speed of the coming transition on its own terms. While researchers have begun to map the economic implications of AI disruption, what is largely absent is original economic thinking that starts from the disruption itself and asks what a genuinely new social contract would require.
AI capabilities are advancing on a timeline that will force policy choices in the near term. The economic consequences are arriving in step. The share of employees concerned about AI-driven job loss has jumped from 28% to 40% in the last two years. A 2025 Stanford study using payroll records from millions of American workers found a 16% relative decline in employment for early-career workers in AI-exposed occupations since late 2022 — while older workers in the same roles saw continued growth. White-collar sectors that account for 40% of U.S. GDP have been cutting jobs on net for three years despite continued economic expansion. Financial analysts have begun to consider how quickly these layoffs could snowball into a near-term financial crisis on the scale of 2008, before the next presidential election. Most of these workforce changes will come from the technological advances themselves, but employers who anticipate those advances may begin displacing workers even before AI can fully do the job.
Americans are forming views about AI that create a clear opening for leaders willing to listen to them. Blue Rose Research’s national research on AI opinions reveals several foundational dynamics:
- AI is rising in importance faster than any other issue facing Americans.
- AI is emerging at a moment of heightened economic populism: 64% of voters believe America is rigged for the elite, and only 27% trust AI companies to do the right thing. All AI policy decisions will be received through this lens of collapsed institutional trust.
- Americans see the transition coming: 71% of voters believe large-scale AI unemployment is likely in the next decade. Americans lean slightly optimistic about AI’s development overall, but that optimism fades sharply when they consider long-term societal impacts and the speed of advancement.
Additional materials on the current state of Americans’ opinions can be viewed here.