So you agree that the internet safety official in Australia should be able to censor what we discuss here on this forum?
Yes. "Socialist" is just a label, kind of like the National Socialist German Workers' Party, who weren't socialists but fascists. I'm glad you finally get it.
I don't think we should be able to post beheadings without warning or spoilers. Hell, I PERSONALLY think spoiler warnings for movies are a must, but nothing to put someone in jail over.

SO you believe in the Anything Goes Internet? Beheadings, regardless of whether the families know or are involved? Revenge p*rn? Blackmail? Anything goes?

Rocket River
As you should. If digital watermarking or labeling can be defeated, then there isn't much value in it, nor should you trust it. I see no reason why watermarking cannot include embedded digital authentication, thus preventing its creation by unauthorized sources or its removal/manipulation without detection. Reputation and trust would work as they do today for those who sign their software.
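A toy sketch of what I mean, assuming the watermark payload gets signed with the generator's private key so anyone holding the public key can verify who made it and spot tampering (the payload fields and key handling here are illustrative, not any vendor's actual scheme):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The generating tool holds a private key; platforms hold the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Hypothetical watermark payload to be embedded in the image metadata.
payload = b'{"generator": "ExampleAI", "model": "v1", "created": "2024-04-22"}'
signature = private_key.sign(payload)

# Verification: succeeds only if the payload is intact and was signed
# by the holder of the private key, i.e. an authorized source.
try:
    public_key.verify(signature, payload)
    print("watermark authentic")
except InvalidSignature:
    print("watermark forged or tampered with")
```

Stripping the watermark entirely is still possible, of course; the signature only stops forgery and undetected manipulation, which is the same guarantee code signing gives.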
The video in question was disturbing but no blood was seen. Did you watch it? Generally speaking, I prefer that government stay out of censorship. Internet sites can censor themselves. I agree with you that warnings should be placed on some videos and pictures as a courtesy. Do you believe that the Australian government official should have the power to force X to remove the video?
When I read Meta's article, all I could think was, "when are they going to use that miraculous technology against their child p*rn problem?" But nope... no mention, and the proposed tech in the paper is still in the works. Nothing is as "relatively easy to solve" as it seems, especially with these matters. These learning models are designed specifically to simulate realism and are given reinforcement toward that abstract objective. If you have the will and the compute, the rules and weights can easily change. Though if you decide to mint NFTs with said hand-crafted artisanal AI "watermarks", and slap on blockchain along with your last line as the motto, you could probably make 6-7 figures in a few months, no joke.
Apparently there are sites that can create deepfake nudes of people and also generate fake images of them in sex acts using AI. Teenagers are using them to create nudes of female classmates, and they are also being used to extort and harass teenagers. I'm on my phone so it's hard to post a lot of text from the article, but here is the link. https://www.nytimes.com/2024/04/22/technology/deepfake-ai-nudes-high-school-laws.html

Spurred by Teen Girls, States Move to Ban Deepfake Nudes

Legislators in two dozen states are working on bills, or have passed laws, to combat A.I.-generated sexually explicit images of minors. States are on the front lines of a rapidly spreading new form of peer sexual exploitation and harassment in schools. Boys across the United States have used widely available "nudification" apps to surreptitiously concoct sexually explicit images of their female classmates and then circulated the simulated nudes via group chats on apps like Snapchat and Instagram.
I think private companies can and will censor their own platforms. I think it's better for the people that government not be involved in censorship. How about you?
I think we are talking about two different things. You are talking about detecting AI-generated content in an ad-hoc way, which is possible but difficult and not reliable (although it can be a potential method to detect some % of AI-generated content). On the other hand, I'm talking about AI-generated content being labeled through established best practices, requirements, or standards. For example, if you use the Google AI tool to generate content, it will be embedded with an invisible indicator. If you use Meta AI, it will label it with both a visual indicator and invisible metadata. Any AI tool can go this route, and I'm quite sure the industry will come up with an agreed-upon standard. Of course, not everyone will follow the standard, but if you don't, you are at risk of being left out or not being trusted.

The META link is about their own practice and the potential industry standard. It's not about ad-hoc detection. META said that they will label images when they can detect industry-standard indicators that the content is AI-generated:

"When photorealistic images are created using our Meta AI feature, we do several things to make sure people know AI is involved, including putting visible markers that you can see on the images, and both invisible watermarks and metadata embedded within image files. Using both invisible watermarking and metadata in this way improves both the robustness of these invisible markers and helps other platforms identify them. This is an important part of the responsible approach we're taking to building generative AI features. Since AI-generated content appears across the internet, we've been working with other companies in our industry to develop common standards for identifying it through forums like the Partnership on AI (PAI). The invisible markers we use for Meta AI images – IPTC metadata and invisible watermarks – are in line with PAI's best practices."
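For the cooperative-labeling case, the check really is mundane. A crude sketch, assuming the tool wrote IPTC's DigitalSourceType term for generative AI ("trainedAlgorithmicMedia") into the image's embedded XMP packet; a real implementation would use a proper XMP/metadata parser rather than a regex:

```python
import re

# IPTC's DigitalSourceType term for content produced by a trained model.
AI_SOURCE = b"trainedAlgorithmicMedia"

def looks_ai_labeled(path: str) -> bool:
    """Scan the file's embedded XMP packet for the IPTC AI label."""
    with open(path, "rb") as f:
        data = f.read()
    # XMP travels as an XML packet embedded inside the image file.
    xmp = re.search(rb"<x:xmpmeta.*?</x:xmpmeta>", data, re.DOTALL)
    return bool(xmp and AI_SOURCE in xmp.group(0))
```

Which, yes, only catches content that wants to be labeled. That's the standards half of the problem, not the detection half.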
You severely overestimate the morality and ethics of a privately owned entity. Corporations have the psychology of psychopaths. These are the same entities that know a defect will kill millions, but the cost-benefit analysis says it's an acceptable risk. You may hate governmental entities, but to think corporations and privately owned companies will be better is just naive.

Rocket River
Can you provide any examples of how private companies are unwilling to censor and how significant harm is being inflicted upon the public? Or are you ready to create new laws, regulations, and hire more government employees to censor speech?
Cute. Why would I have examples when the government is doing it for them? I do know that there have been instances of companies keeping secrets that hurt the public... I forget the car company that had a defective component, but they mathed it out that the cost of lawsuits and accidents was less than the cost of the recall.

Rocket River
Looks like you're now back to square one of RR's concern that you quoted... how to distinguish real images from deliberate fakes. Meta and Google's answer is to openly declare they're fakes or promote a top-down fake standard. Fakes that want to stay hidden can skip the watermarks and stay hidden, and your relatively easy solution will need to engineer a way to "trust" real images. So you end up with three buckets:

- Real images
- Fakes without watermarks, made locally
- Fakes with watermarks, made by a service
That's an entirely different animal. Nobody is advocating for companies to hide knowledge that their products or services are harming people. I don't believe there is a need for government to begin censoring speech as they clearly did prior to the '20 election when they pressured media outlets to conceal the fact that Hunter's laptop was NOT, in fact, "Russian disinformation".
How do we distinguish between "safe" software and "unsafe" software? The industry came up with a solution. And sure, there are those who decided not to sign their software. Some of them are still okay because we know them by reputation, and so we are willing to take a higher risk with them, but other than those rare exceptions, we do not install software that is not signed.

I'm saying that coming up with a similar (not exact, of course, because we are talking about "information", not executables) solution is feasible; tools are already available from some vendors and are already under discussion by industry-standard AI groups. There are still other challenges to be worked through (how to positively ID images that are actually real, as you stated - see below), but this isn't going to be the wild wild west of no indicators where you are on your own. It's closer to what we already have today. I also pointed out a more serious issue (isolation by choice through AI self-selection). Industry can also come up with some agreement and method on real images - "this is verified real".
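A toy sketch of the "verified real" half, assuming a capture device signs a hash of the exact pixels at capture time (real proposals such as C2PA content credentials chain the device key to a manufacturer certificate; every name below is illustrative):

```python
import base64
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical per-device key, provisioned by the camera manufacturer.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

def sign_capture(image_bytes: bytes) -> bytes:
    """Camera side: bind a hash of the pixels into a signed manifest."""
    manifest = json.dumps({
        "claim": "captured-by-device",
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }).encode()
    return manifest + b"\n" + base64.b64encode(device_key.sign(manifest))

def is_verified_real(image_bytes: bytes, credential: bytes) -> bool:
    """Verifier side: any edit to the pixels or the manifest fails the check."""
    manifest, sig_b64 = credential.rsplit(b"\n", 1)
    try:
        device_pub.verify(base64.b64decode(sig_b64), manifest)
    except InvalidSignature:
        return False
    return json.loads(manifest)["sha256"] == hashlib.sha256(image_bytes).hexdigest()

# A platform could show a "verified real" badge only when this check passes.
photo = b"...raw image bytes..."
cred = sign_capture(photo)
assert is_verified_real(photo, cred)
assert not is_verified_real(photo + b"edited", cred)
```

Same trust model as signed software: the math only proves the image hasn't changed since a known key signed it, and the rest comes down to reputation of whoever holds the key.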