When any organization qualifies the types of surveillance it refuses to do with terms such as “domestic” and “mass,” all critical thinkers must ask: why those qualifiers, and what are they saying they are OK to do?
@katekaye
Personal account. Researching algorithmic & surveillance tech in Portland, OR at RedtailMedia.org. '25-'26 OSU fellow. 25 yrs as a journo reporting on data use, AI, etc. Music nerd, forest walker, raptor watcher and fan of the NY Mets. KEEPIN THAT PMA.
but a good event idea just the same - have a beer and read/discuss a privacy impact assessment
Just a follow-up re: my author experience @facct.bsky.social this year - my interactions w my senior area chair have been great & it seems other chairs truly care about ensuring inclusion & a genuine review process. Part of the problem is the big increase in papers submitted. A growing-pains problem.
there are actually 3 new PIAs out just recently - if you can't find them let me know...
Has anyone out there seen a definition of "mass surveillance" or "mass domestic surveillance" in relation to just "surveillance"?
I've been marveling at media repetition of the term "mass surveillance" as though the other kind is not problematic.
The hours I've spent as Area Chair for @facct.bsky.social this year have been educational & fulfilling, but my experience as a paper author has been so disappointing as to sour my attitude toward the conference:
-AI-based reviews
-lazy/absent area chair
-no recognition of my rebuttal
-sparse meta review
oh, that too!
“Pricing algorithm” means any computational process, including a computational process derived from machine learning or other artificial intelligence techniques, that processes data to recommend or set a price or commercial term within the jurisdiction of this state.
“Surveillance pricing” means when a person sets a price offered to a consumer based, in whole or in part, upon personally identifiable information gathered through an electronic surveillance technology, including electronic shelving labels.
Price-fixing algorithm means a software, system, or process that does all of the following: (1) Collects historical or contemporaneous information on a price, price change, or supply level of a good or service from two or more persons or from public databases. (2) Analyzes and processes the information described in paragraph (1). (3) Creates pricing models based on the analyses and processing described in paragraph (2).
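To make the three statutory prongs above concrete, here is a deliberately minimal toy sketch of a system that would fall under this definition. All names and numbers are hypothetical illustrations, not drawn from any real pricing product or the bill text:

```python
# Toy illustration of the three prongs in the CA "price-fixing
# algorithm" definition. Names and data are hypothetical.
from statistics import mean

def collect_prices(sources):
    """Prong (1): gather price observations from two or more persons/sources."""
    return [price for source in sources for price in source]

def analyze(prices):
    """Prong (2): analyze and process the collected information."""
    return {"mean": mean(prices), "min": min(prices), "max": max(prices)}

def price_model(stats, margin=0.05):
    """Prong (3): create a pricing model from the analysis."""
    return round(stats["mean"] * (1 + margin), 2)

# Two competitors' observed price lists (hypothetical):
observations = [[9.99, 10.49], [10.25, 9.75]]
stats = analyze(collect_prices(observations))
suggested = price_model(stats)  # a price derived from others' prices
```

The point of the sketch is how low the bar is: even this trivial averaging loop collects prices from multiple persons, processes them, and produces a pricing model, so it would satisfy all three prongs.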
Here are some definitions from recent CA bills for "Pricing algorithm," "Surveillance pricing," and "Price-fixing algorithm."
Following my initial reporting on the water bureau project, the bureau provided new information to Portland City Council regarding data use, privacy, and algorithmic assessment plans, but many questions remain unanswered.
In Portland, the Water Bureau is piloting an algorithmic pricing system to automate bill discount levels. I've reported extensively on this otherwise ignored project. Lawmakers in CA have introduced several bills to regulate algo pricing, a very nuanced area. My reporting and more on CA follows:
Drafting an article related to my "Trust in the Age of Generative AI" @oregonstate.edu fellowship project.
Working title: Math, Lies and Cassette Tapes
Sharing this from a self-proclaimed "organized data lover" I've interviewed over the years:
"They have me zig-zagging across the country with a minor when a far more reasonable option existed..."
"...No human would choose that routing. An AI algorithm did."
Will add that a lot of comments here reflect the sort of mandate journalists have for every story: there are reasons to bring in multiple voices and perspectives. I don't see academic or other research as needing that. It can even weaken it.
Saw this happening in some FAccT paper reviews that came through my area chair desk recently. Reminds me of “solutions journalism.” I think discussing this as future work makes sense for some but it’s not a necessary requirement for critiques that can stand on their own and serve a valuable purpose.
In a @facct.bsky.social paper I'm reviewing, authors say they're applying a measurement technique that's known not to work well but they're using it anyway because it's "widely used."
That tracks.
I'm looking forward to sharing with FOIA workshop participants how I've used public records requests as one tool in reporting these stories about Portland city government use of algorithmic and surveillance tech:
What happened when an AI alignment director ran an AI agent after testing it in a toy email account. Doh.
www.businessinsider.com/meta-ai-alig...
I'm an area chair for a conference right now and seeing problematic reviews that were obviously written by or at least 'aided' by AI. It's a terrible precedent. Why would anyone take the time to submit work to a conference that allowed AI generated reviews? It's demoralizing.
Image states FAccT guidance on use of LLMs for reviews: "No, you should not use LLMs to write reviews. ...However, the use of third-party generative AI tools or software (local or cloud) is permitted, though not encouraged, to check fluency, formatting, grammar, spelling, punctuation, and language translation of the review that you wrote."
As a @facct.bsky.social area chair I'm seeing some excellent reviews clearly written by humans who read the submitted paper. As for the paper I submitted, there's at least 1 review clearly written w 'assistance' of LLMs & a general lack of detail or comprehension in the reviews. Quite disheartening.
It's not necessarily about requests for massive amounts of information that will spur lawsuits. It's about teaching a journalistic skill to people who are not everyday journalists.
I'm not suggesting this replaces local reporting, but rather that 'non-professional journalists' can learn how to gather information about their local government including how it uses tech. This act can foster agency, civic engagement and hopefully better inform our community.
Yes of course. I have had to pay for records in the past. In some cases at this workshop, it will make sense for some people to request a public-interest fee waiver. This project isn't about massive document requests but about learning a new skill for getting additional info about local government.
Some of us believe in new/supplemental approaches. I see a path in teaching investigative journalism skills/public records requests/research to people curious/concerned about tech use by local government as a way to foster civic engagement. bsky.app/profile/kate...
2/2 So this is the next 'nail in the coffin' of local media - the media outlet itself pushing for me to ditch print entirely & 'go green.' I understand the economics of it, but reading online is not the same & I hate the idea of someday not having a print paper to read at least a couple times a week.
1/2 adjacent issue: I'm a journo who's subscribed to local papers & local public radio my whole adult life. My old-school 'local' paper (not alt weekly/indie & statewide coverage) The Oregonian has been paying to send mailers & for telemarketers to convince me to switch from print to digital only.
If you want to delete your data from Stripe and its ecommerce auto-fill sibling Link, it's actually pretty easy - support.stripe.com/how-to-delet...
or support.stripe.com/questions/i-...
Bran Knowles is one of many from various fields I've been speaking w for my @oregonstate.edu fellowship interrogating trust measurement. She writes that the AI regulatory apparatus is a "witness layer" for a trustworthy AI "narrative" that does not "meaningfully address harms indexed by distrust.”
Academic paper reviewers may have plugged paper excerpts into a genAI model and prompted it to spit out some criticisms. Ultimately the humans have to choose reject, accept, etc. But some may have used AI to assist in writing their reviews.
would love for you to elaborate a bit