A Proctorio vulnerability was published by the National Vulnerability Database. nvd.nist.gov/vuln/detail/...
I sent "I am requesting access to my personal information under the Personal Information Protection Act of British Columbia, Canada. Please provide a copy of all personal information your organization holds about me and a record of how it has been used or disclosed."
The BCcampus-funded Zero Textbook Cost project at BCIT just published its first video! We interviewed students about textbook costs, and what they think about Zero Textbook Cost.
mediaspace.bcit.ca/media/ZTC+St...
The original court document didn't have OCR, so I've made it more accessible.
Some words you can search for are "sycophancy", "psychological", "anthropomorphic", "negligent", and "duty of care".
There was an issue with the link. Fixed.
Here is the Notice of Civil Claim: drive.google.com/file/d/1iNun...
www.cbc.ca/news/canada/...
Did you see them open for Mars Volta in 2003? I was too late!
OpenAI's push to become crucial infrastructure in education should not and cannot be separated from its broader entanglements with the US military and mass surveillance that includes students and teachers.
You can write Algorithmic Impact Assessments too. The Government of Canada has a tool for that.
Link: open.canada.ca/aia-eia-js/?...
Do this for your institution and get it seen. Calculate impact levels, assess mitigation measures, and present it as risk. It is possible to stop using e-proctoring.
In 2025, I helped write BCIT's first Algorithmic Impact Assessment. Here's a link: drive.google.com/file/d/1S1i_...
With AIAs now required for ed tech tools that make automated decisions, I am working to stop the use of AI surveillance software here. It's always about protecting students from harm.
If ChatGPT writes a word of this apology, it is not a real apology. www.cbc.ca/news/canada/...
"I made the very specific decision not to ask about the content of the chats with Mr. Altman. I don't want to play any role in interfering with the criminal investigation that's under way".
I wish Eby would ask for the content ChatGPT generated.
Gift link: www.theglobeandmail.com/gift/4e12d7e...
Let's nationalize AI (and shut it down).
Gift link: www.theglobeandmail.com/gift/4e12d7e...
CanLII vs. Caseway AI has been confidentially settled: www.canadianlawyermag.com/resources/le...
This was an interesting case because CanLII is a non-profit largely providing access to public records. You can check out their case in the Notice of Civil Claim: drive.google.com/file/d/1MqlY...
We are really going to regret the technology we have built.
I'll be paying attention to what the Province of British Columbia has to say once it's conducted an investigation of OpenAI's role in the tragedy.
It just keeps getting worse. What was ChatGPT generating for this person?
I sort of wonder if the way it works is that each eye is assigned one other eye to spy on. Maybe the US will force their eye to use Claude.
You just reminded me of Five Eyes, which seems relevant. en.wikipedia.org/wiki/Five_Eyes
It is a really sad state of affairs that Anthropic is the "ethical" GenAI company because they won't allow their technology to be used for "mass domestic surveillance" (only mass international surveillance) or "fully autonomous weapons" (only partially autonomous ones).
www.anthropic.com/news/stateme...
In grappling with OpenAI/Tumbler Ridge links, I wanted to write a post this week about how ChatGPT and other genAI tools are not really EdTech. But it's not quite that straightforward.
The good news is that the Canadian Privacy Library has been updated with Coast Mountain College and NVIT's Privacy Impact Assessments from 2024-2026.
The bad news is that these institutions have completed zero Privacy Impact Assessments in the past two years.
It's unclear if OpenAI has provided the chat logs. The biggest part of this may be how ChatGPT responded. www.cbc.ca/news/politic...
Turnitin put out a press release today claiming that 14.8% of essays are flagged by its AI detector, a 348.5% increase from 2023.
What this really means is that there are more false positives than ever. More harm.
"A representative of tech giant OpenAI met with the B.C. government one day after an 18-year-old killed six people in a school shooting in Tumbler Ridge, but the company did not disclose that it had suspended the shooter's ChatGPT account months earlier..."
www.theglobeandmail.com/gift/4e12d7e...
I knew something was wrong when it said it was making a "fresh inbox" for me.
Thread full of people affected by this. www.reddit.com/r/yahoo/comm...