Who’s Calling Me “Quiet”?
By Councilor Candace Avalos
ICYMI, I was just featured in an Oregonian article titled “Portland city councilor quietly uses AI to help craft her public pronouncements.” I’m used to people talking about me. I served for years on the Citizen Review Committee to push for police accountability; I was the executive director of Verde — one of Portland’s most vocal environmental justice groups; and I sat on the Charter Commission, which helped advocate for a more representative city government. I’m used to hearing talk, but one word I haven’t seen used to describe me is “quiet.”
I’ve always been upfront with my beliefs, even when it would be easier politically to stay silent. I’m not afraid to be loud, especially when Portland’s power brokers have ignored the voices of the people I represent in East Portland for too long. So, miss me with the accusation that I’m doing anything “quietly.”
I want to be clear: the column you’re reading right now wasn’t written by AI (you can run it through an app if you want to check). But that’s not because of one article in the Oregonian. I’m writing this column without AI to show you that it’s always been my voice, my stories, and my leadership, no matter the process it takes to get to the final product. I understand that not everyone is going to agree with the use of AI, and that’s okay. We do need to have these conversations though, because AI is not going away.
First, I want to address some of the ways the article misrepresented my use of AI. I don’t frequently use AI on the dais; I use it occasionally, and that’s what I told the reporter. Mostly, I use AI for personal purposes. Other councilors have used AI too, including the Council President, who used it to analyze the speaking time of council members. As the reporter himself wrote in an earlier article, “the analysis, generated using an AI transcription program, reviewed 31 regular council meetings and work sessions held between Feb. 5 and June 5.”
Just like newspapers, businesses, and other organizations, I use AI “in a variety of ways to edit and draft” some communications. But I don’t use AI for everything, and all my communications are reviewed by me and my office. It’s a tool, and like any tool it can be used to cause harm, or it can be used responsibly.
I honestly thought the candid conversation I had would lead to a larger discussion of how local governments approach AI policy. The article that came out missed the mark. Really, I think what was published says less about me and more about how certain reporters have chosen to take every opportunity to get in yet another dig at a young, Blacktina woman who’s not afraid to challenge our city’s establishment. If you’ve been following the news over the past couple months, you’ll know exactly what I’m talking about.
I’m always looking for ways to be more efficient. Since I’ve been in office, it often feels like there aren’t enough hours in the day to meet with community stakeholders, prepare and attend council and committee meetings, make visits around Portland, and knock on the doors of East Portlanders. AI is just one of the strategies I’ve used to maximize my time so I can focus on what matters most: helping constituents stay in their homes, resolving unfair fines, and fighting for East Portland.
I’m not a quiet person, and I refuse to play by the rules of those used to setting our city’s agenda. I started this column so I could speak directly with you instead of relying on the media to represent me and my values. This new article is yet another example of why it’s necessary to have my own space. Sometimes this column may involve AI in the editing or drafting process, but I promise you that it’s always me behind the words.
AI has a host of challenges, from data privacy concerns to worker rights violations to environmental burdens. But pretending like AI isn’t out there won’t help us regulate it better. If you have thoughts or concerns, you can always reach out to my office. Having constructive dialogue is the only path to finding true solutions to the issues that face us.
To continue the conversation, I reached out to East Portlander Héctor Márquez, who has been thinking about the growing use of this technology and how we can best maximize its potential while minimizing the risks.
Héctor Márquez, Executive Director of Historic Parkrose
- Tell me a little bit of your background, what was your first experience with AI?
I was born in Mexico City and have a degree in Communication Science and Digital Media. My first exposure to IT came early—my stepdad was part of the team that designed the network for Mexico’s internet and later for the country’s electoral system. Watching that work sparked my interest in technology. Now he is one of the academics in Mexico who study the impacts and uses of AI.
During college, I worked at a digital newspaper, which was my first step into professional communications. From there, I became a consultant for the Mexican government, supporting the IRS, the Senate, Congress, the Secretary of Education, and local governments, helping them improve their digital footprint.
About ten years ago, I moved to Portland and joined WeLocalize, a global company working in translation, localization, and data management for clients like Meta and Spotify. That’s where I began using automation—what I think of as the “preview of AI”—to streamline workflows so people could focus on more strategic tasks. One of my first projects involved training a machine to recognize words in different languages by pairing text with images, and even to detect small grammatical issues.
Later, I worked with Nike as a localization quality assurance expert, responsible for ensuring the quality of content across 21 languages and markets worldwide. That was right when AI tools were becoming accessible, and I began experimenting with large language models. I trained AI systems to produce translations that weren’t just literal but culturally accurate and natural for each region, which made it possible to scale multilingual content much more efficiently.
Now, as Executive Director of Historic Parkrose, a small nonprofit with very big goals, AI has become a vital tool. With limited staff but major responsibilities, I’ve learned to leverage AI to turn ideas into real projects—whether it’s communications, analysis, or planning. Used responsibly, it’s an incredible asset.
In short, I’ve been involved with technology and automation from an early stage, and I embraced AI as soon as it became available. For me, it’s never been about replacing people—it’s about using the tool to expand our capacity and impact. And honestly, I believe: if you’re not using it, you’re losing it.
- How do you use AI?
I see AI as a tool—like a shovel or a calculator—not as intelligence. The term “artificial intelligence” sounds marketable, but in reality, the machine isn’t thinking. It’s not truly processing information—it’s mirroring, it’s predicting. It’s very good at guessing what we want to see or hear, but the quality of the result depends entirely on the user.
For me, AI is an assistant, not a worker. If you don’t know how to use it, or if you’re not an expert in the subject you’re applying it to, the results will be mediocre and wasteful. But if you bring your expertise and use AI wisely, it can enhance your ideas, streamline workflows, and multiply your capacity.
I don’t use AI for things that humans can do better, like creating original art. I use it for what it does best: bouncing ideas, improving clarity in language, checking translations, drafting instructions, or building workflows that free my staff from repetitive tasks. In my work, it’s like a “language calculator”—a machine that helps me do more with less.
And the truth is, we all already use AI. Every day. Your email spam filter, your phone’s autocorrect, your Spotify and Instagram algorithms—that’s AI. The difference is whether we use it consciously and responsibly. That’s why I believe small and large organizations alike need to understand AI as a tool that enhances human intelligence, not one that replaces it. If we feed it good information and ideas, it can expand what we’re capable of.
- Why do you teach a class on ethical AI use in East Portland?
I teach a class on ethical AI use in East Portland because it’s crucial for small businesses, nonprofits, and the public sector to understand both the opportunities and risks of this tool.
On one hand, AI can save enormous time and resources. For a small team that doesn’t have the money or staff to manage every detail, AI can help polish complex instructions, translate communications, or generate drafts that would otherwise take hours. It’s not about writing emails or creating memes—that’s wasteful. It’s about leveraging the tool to do more with less.
On the other hand, AI comes with real costs. It consumes huge amounts of energy, and when misused, it spreads misinformation or creates fake images and text that can confuse the public. Just as older generations struggled with misinformation online, we now face an even bigger challenge with AI-generated content. That’s why training people to recognize what’s real and what’s fabricated is essential.
For East Portland, this work is especially important. We have many immigrant- and minority-owned businesses and nonprofits with small teams who don’t always have the time or resources to keep up with fast-moving technology. Meanwhile, large corporations are already using AI at full speed. If our local community doesn’t learn how to use it wisely and responsibly, we risk falling behind.
So my goal is twofold: to empower people with practical, ethical skills for using AI as a resource-saver, and to build awareness so we can recognize and guard against its dangers. At its best, AI is not a toy or a replacement for people — it’s a powerful tool that, when handled responsibly, can enhance our intelligence and strengthen our community.
- Is there anything specific you think people should keep in mind with AI use?
The most important thing to remember is that AI is not a toy. It’s accessible to everyone, but it’s not always being used in ways that are productive for humanity. My hope is that AI can free us from unnecessary work, help us understand ourselves better, and maybe even level the playing field a little more.
But AI is a double-edged tool. Just as it can create opportunities, it can also be used in harmful ways. Our society has a history of creating powerful technologies without always putting the right safeguards in place. That means we have to stay vigilant, because there are always private interests and actors who may not use AI in the public’s best interest.
The reality is, AI is here to stay. So the real question is: how do we use it responsibly, to leverage our collective power as a society? We need regulations, education, and strong guardrails to ensure AI is used to strengthen communities rather than exploit them. If we demand accountability and stay awake to how these tools are being deployed, AI can be part of building something better, not something more dangerous.
- Is there anything specifically you would tell elected or government officials about the use of AI?
To elected officials, my message is simple: you need to catch up. Right now, the legal and regulatory understanding of AI is far behind the technology itself, and that gap is dangerous.
AI is a powerful tool—it can help society solve problems, improve efficiency, and expand opportunity. But it’s also a dangerous one if left unchecked. Without strong oversight and clear regulation, AI can be misused in ways that undermine trust, destabilize communities, and even threaten the systems we’ve worked so hard to build.
This is not an area where “wait and see” is an option. Government must step in now to:
- Establish guardrails that protect the public from harmful uses.
- Ensure transparency and accountability from the companies building and deploying these systems.
- Support education and training, so communities understand how to use AI responsibly.
AI is here to stay. The question is whether our laws and policies will evolve quickly enough to ensure it serves the public good instead of private interests alone.
