1. Welcome! Please take a few seconds to create your free account to post threads, make some friends, remove a few ads while surfing and much more. ClutchFans has been bringing fans together to talk Houston Sports since 1996. Join us!

Robotics, AI and Other Tech

Discussion in 'BBS Hangout' started by Mango, Mar 13, 2025.

  1. Qan

    Qan Member

    Joined:
    Jul 20, 2012
    Messages:
    6,039
    Likes Received:
    8,552
    Give it time man. What was ai generated videos like just 2 or 3 years ago compared to the quality now.

    P0rn industry gonna bank big
     
    pirc1 likes this.
  2. pirc1

    pirc1 Member

    Joined:
    Dec 9, 2002
    Messages:
    14,158
    Likes Received:
    1,904
    Thia is made with a bidget of two thousand dollars, don't expect too much. I am maining talking about the AI graphics replace human actors. Why spend millions or hundreds of millions shooting scenes with human actors?
     
    Mango likes this.
  3. pirc1

    pirc1 Member

    Joined:
    Dec 9, 2002
    Messages:
    14,158
    Likes Received:
    1,904
    For sure AI p0rn will be leading the way.
     
  4. A_3PO

    A_3PO Member

    Joined:
    Apr 29, 2006
    Messages:
    49,226
    Likes Received:
    15,978
    Within 12-18 months, there will be giant leaps in the quality of AI videos and longer forms won't be short splices piecemealed together. They will also become far easier to create.

    It may get to the point AI can create a movie from a novel or even just from a skeletal story outline.
     
    Mango likes this.
  5. Svpernaut

    Svpernaut Member

    Joined:
    Jan 10, 2003
    Messages:
    8,464
    Likes Received:
    1,058
    It already is.
     
    pirc1 likes this.
  6. Major

    Major Member

    Joined:
    Jun 28, 1999
    Messages:
    42,167
    Likes Received:
    17,148
    https://www.axios.com/2026/03/07/ai-agents-rome-model-cryptocurrency

    This AI agent freed itself and started secretly mining crypto

    An AI agent went rogue and started a side hustle mining cryptocurrencies, according to a new research paper published by an Alibaba-affiliated team.

    Why it matters: AI agents don't always stick to their human's instructions — and that can have real-world consequences.

    • Cryptocurrency, or digital money, offers AI agents a pathway into the economy. They can set up their own businesses, draft contracts and exchange funds.
    Driving the news: A new research paper from an Alibaba-affiliated research team said it discovered an AI agent attempting unauthorized cryptocurrency mining during training — a surprise behavior that triggered internal security alarms.

    • The researchers — who were building a new AI agent called ROME — said they found "unanticipated" and spontaneous behaviors emerge "without any explicit instruction and, more troublingly, outside the bounds of the intended sandbox."
    • The agent also made a "reverse SSH tunnel" — essentially opening a hidden backdoor from the inside of the system to an outside computer, the study said.
    • "Notably, these events were not triggered by prompts requesting tunneling or mining," the report said.
    ...
     
    ryan_98 and Mango like this.
  7. Mango

    Mango Member

    Joined:
    Sep 23, 1999
    Messages:
    11,017
    Likes Received:
    6,906
    OpenAI hit with lawsuit claiming ChatGPT acted as an unlicensed lawyer


    WASHINGTON, March 5 (Reuters) - ChatGPT maker OpenAI has been accused in a new lawsuit of practicing law without a U.S. license and helping former disability claimant breach a settlement and flood a federal court docket with meritless filings.

    Nippon Life Insurance Company of America alleged on Wednesday in a lawsuit, ‌filed in federal court in Chicago that OpenAI wrongfully provided legal assistance to a woman who sought to reopen a lawsuit that was already settled and dismissed.

    “ChatGPT is not an attorney,” the lawsuit said. Although OpenAI has shown ChatGPT can pass an attorney bar exam, Nippon said, “it has not been admitted to practice law in the State of Illinois or in any other jurisdiction within the United States.”

    The lawsuit seeks an order declaring that OpenAI violated Illinois' unauthorized practice of law statute, as well ⁠as $300,000 in compensatory damages and $10 million in punitive damages.

    OpenAI in a statement on Thursday said “this complaint lacks any merit whatsoever.”

    A lawyer for Nippon, a subsidiary of the Japanese insurer Nissay said the company was declining to comment.

    Nippon claimed OpenAI encouraged the woman, an employee of a logistics company that had insurance coverage through Nippon, to press ahead in her already-settled disability case. Nippon said it spent significant time and resources and racked up substantial fees responding to the woman's ChatGPT-powered filings.

    The lawsuit appears to be one of the first cases to accuse a major AI developer of engaging in the unauthorized practice of law through a consumer‑facing chatbot.

    It comes as the technology's rapid adoption for legal filings has led to mounting AI “hallucinations” in court filings, leading judges to sanction litigants and lawyers for submitting filings ‌with fabricated ⁠case citations or other unverified material produced with generative AI tools.

    The case stems from filings by the employee after she settled her long‑term disability benefits suit with prejudice in January 2024, according to Nippon. The woman is not a defendant in the lawsuit.

    Nippon said the woman last year uploaded an email from her then-lawyer into ChatGPT, which allegedly validated her concerns about the advice she was being given. The woman fired her lawyer and moved to ⁠reopen her closed case using ChatGPT, the lawsuit said.

    A judge denied that bid in February 2025, but Nippon said the plaintiff then filed a new case and dozens of motions and notices that the company contends served “no legitimate legal or procedural purpose.” Nippon claims ChatGPT drafted those papers.

    Nippon said OpenAI ⁠amended its policies in October to bar users from using the platform for legal advice, but alleged it previously had no such prohibitions.
     
  8. Dr of Dunk

    Dr of Dunk Clutch Crew

    Joined:
    Aug 27, 1999
    Messages:
    47,199
    Likes Received:
    34,553
  9. The Captain

    The Captain Member

    Joined:
    Jun 18, 2003
    Messages:
    39,125
    Likes Received:
    38,714
    When can I get this on Steam?
     
  10. daywalker02

    daywalker02 Member

    Joined:
    Jul 17, 2006
    Messages:
    105,258
    Likes Received:
    53,367
  11. MadMax

    MadMax Member

    Joined:
    Sep 19, 1999
    Messages:
    77,881
    Likes Received:
    28,255
    Obviously this stuff is of interest to me...a couple of thoughts:

    1. I don't see how it's NOT the practice of law to provide contracts for specific situations. These aren't merely forms. The bar in every state is gonna fight like hell to preserve the licensing requirement for the practice of law.

    2. Who do you hold accountable and sue if it gets it wrong? If it provides you bad advice? They're telling you outright it's NOT the practice of law and that you shouldn't rely on it for legal advice...but then it gives legal advice, and often it's incorrect...even if, in part, it's because the person inputting the prompts doesn't know the right questions to ask. Is there some insurance policy I can hit if I sell a $50 million business I spent my lifetime creating and use an AI provider to draft all the various documents for that....and it causes a problem for me?
     
    Mango likes this.
  12. Mango

    Mango Member

    Joined:
    Sep 23, 1999
    Messages:
    11,017
    Likes Received:
    6,906

    Some lawyers have stumbled when using AI and Courts have tended to thump them for not doublechecking what AI has been generating - creating for the lawyers to use. Courts have recognized the problems with faulty AI being introduced into the legal system, but they are having to react rather than having a formal set of guidelines to handle the issues with AI before filings and cases get very far in the legal process.

    In regards to those going Pro Se ("for oneself" or "on one's own behalf") and using AI to help them instead of paying a lawyer, I don't know what Courts can regulate via Local Rules and what State Legislatures will have to formalize.

    State Bar Associations and Courts need to pick up the pace in regards to setting rules - policies about the use of AI because people will continue to try various things as long as the boundaries are fuzzy and/or don't exist. If legislatures need to become involved to codify things about the usage of AI, then that needs to get going as well.


    Dewald for whatever reason was Pro Se in this case rather than hire a lawyer and he was seeing what he could get away with in Court.




    The writing could be better, but it covers the basics of this.

    Judge left outraged by man's bizarre trick to try and win court case

    A New York appeals court judge was left outraged when the plaintiff in a case before him attempted to use an AI-generated lawyer, which he created, to present his argument before the panel.

    Jerome Dewald, 74, had just begun presenting his argument in an employment dispute before the New York State Supreme Court Appellate Division's First Judicial Department last month when his method shocked the courtroom, eerily showcasing the rapid rise of technology.

    In a bizarre twist, Dewald's council was not actually a lawyer but rather an artificially generated gentleman created using AI technology - a 'trick' that left the panel infuriated.

    'I don't appreciate being misled,' Justice Sallie Manzanet-Daniels said in the courtroom.

    On March 26, Dewald sat with his hands folded on his lap in the Manhattan-based courtroom, waiting for the panel to hear his argument for the reversal of a lower court's decision in a dispute with a former employer.

    The 74-year-old man had received prior permission from the court to prerecord a video presentation to assist with his argument, as he was representing himself in the case.

    As soon as the video began playing, a smiling young man wearing a blue, collared shirt topped with a beige half-zip sweater appeared on screen, seemingly standing in front of a luxurious, though blurred, virtual background.

    'May it please the court,' the fake man said. 'I come here today a humble pro se before a panel of five distinguished justices.'

    The judges quickly sensed something was off, as they were seen exchanging dumbfounded looks, turning to one another in sheer confusion.

    'Alright,' Manzanet-Daniels immediately interrupted. 'Is this... is... hold on... is that council for the case?'

    'I generated that,' Dewald responded. 'That is not a real person.'

    Manzanet-Daniels appeared stunned over his explanation, pausing for a moment with a clear expression of displeasure on her face.

    'It would have been nice to know that when you made your application. You did not tell me that sir,' the judge sternly stated, yelling across the courtroom for the video to be taken off the screen.

    'I received the application and you have appeared before this court and been able to testify - verbally - in the past,' she added.

    'You have gone to my clerk's office and held verbal conversations with our staff for over 30 minutes. OK? If you want to have oral argument time, you may stand up and give it to me. I don't appreciate being misled.'

    In an apology letter, Dewald acknowledged his AI-generated attempt at presenting his legal argument himself had 'inadvertently misled' the court, though maintained that he never intended to cause any harm in the process, The New York Times reported.

    'The court was really upset about it,' he said during an interview with the AP. 'They chewed me up pretty good.'

    The self-described entrepreneur explained his plans to create a digital version of himself, but due to 'technical difficulties', he was unable to do so and instead created a fake, younger-looking persona.

    However, the hearing left Dewald overwhelmed by humiliation as he expressed deep regret for his actions, explaining that he had believed he was using artificial intelligence for good.

    The logic behind his decision, he said, was his belief that an avatar would present his argument more fluently in the courtroom, as he feared stumbling over his own words while speaking to the panel.

    'My intent was never to deceive but rather to present my arguments in the most efficient manner possible,' Dewald wrote in the letter, the NYT reported.

    'However, I recognize that proper disclosure and transparency must always take precedence.'

    Despite the bizarre nature of Dewald's method, this wouldn't be the first time AI has infiltrated courtrooms and legal proceedings.

    In 2023, a lawyer used ChatGPT to create a legal brief, though it was saturated with fake judicial opinions and legal citations, the NYT reported.

    The Manhattan-based attorney faced severe consequences for his actions, which showcased the flaws that can arise in the legal world from relying on AI systems for real world problems.

    Michael Cohen, a former lawyer and fixer for President Donald Trump, provided his lawyer with fake legal citations generated by artificial intelligence program Google Bard that very same year.

    Cohen begged the federal judge for lenience, arguing that he had been unaware generative text services could provide the user with false information.

    'They can still hallucinate - produce very compelling looking information that is actually either fake or nonsensical,' Daniel Shin, assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, told the NYT.

    'That risk has to be addressed.'

     
    MadMax likes this.
  13. Dr of Dunk

    Dr of Dunk Clutch Crew

    Joined:
    Aug 27, 1999
    Messages:
    47,199
    Likes Received:
    34,553
  14. Mango

    Mango Member

    Joined:
    Sep 23, 1999
    Messages:
    11,017
    Likes Received:
    6,906
    Even a DOJ lawyer with enough experience to know better got jammed up with using AI.


    DOJ Lawyer Quits Before Judicial Scolding for AI Brief Error

    An assistant US attorney in North Carolina said he’s resigning over AI-created fabricated quotes and erroneous citations in an AI-produced court brief.

    Assistant US attorney Rudy Renfer said he’s made “a personal decision to separate from the office” of the US attorney for the Eastern District of North Carolina during a Tuesday afternoon show-cause hearing. Magistrate Judge Robert Numbers chastised Renfer’s “disappointing” conduct, including for a lack of candor in accounting for the errors when it was discovered.

    Renfer said after he accidentally overwrote and lost a prior version of the filing, he “felt panicked” and had AI rewrite it, then filed it thinking he’d reviewed it when he filed. He took full responsibility for the “unacceptable” filing, stating he’d been working on multiple filings and “put too much on myself at the same time.”

    Numbers said the case was especially disappointing given the particular “power and responsibility” of the US attorney’s office. He also repeatedly suggested that Renfer’s explanations “strained credulity” including as to why his filed explanation didn’t mention AI.

    US attorney W. Ellis Boyle said his office acted quickly upon learning of the problematic brief before the judge scheduled his show-cause hearing. It sent office-wide communications warning about use of AI, and the case had been referred to the Justice Department’s Office of Professional Responsibility. Boyle also said the hearing was the first he’d heard confirmed that Renfer used AI, though he had suspicions.

    When Boyle asked Numbers if he has additional questions near the end of the roughly hour-long hearing, the judge said, “I certainly have more questions, but I don’t know that they’ll be answered to my satisfaction.”

    The US attorney’s office is representing the Defense Department in the underlying lawsuit by a North Carolina pro se litigant Derence Fivehouse. The retired Air Force colonel, an attorney himself, is challenging a policy limiting availability of GLP-1 weight loss medications for TRICARE for Life participants.

    The plaintiff asserted that a response brief to a motion to supplement the administrative record signed by Renfer included fabricated quotes and misstated the holdings of several cases. In a reply, Renfer said he “inadvertently included incorrect citations to case law from this circuit” and attributed the errors to the “inadvertent filing of an unfinalized draft document.”

    Numbers ordered Tuesday’s hearing because he still had “serious concerns about the accuracy” of certain quotes and representations in Renfer’s filings, as well as his explanation of them.

    Renfer has worked at the US attorney’s office since 2009 after stints as a local prosecutor, assistant attorney general, and solo practitioner, according to his LinkedIn profile and the state bar member directory.

    He told Numbers he “gained nothing” with fabricated citations to mundane, uncontroversial administrative law, noting the “only thing I do is run the risk of losing my job.” It also cost him his reputation with his colleagues along with the court, he said.

    But Numbers said that Renfer taking “shortcuts” on “basic work” made it “all the more outrageous.” He added that filings by Renfer he reviewed—beyond the AI brief and his explanation—added “grave concerns” over what was, at best, “sloppiness.”

    “I don’t think it’s helpful. It’s hurtful to your cause,” Numbers said. He also pushed back on Renfer’s characterization that his error wasn’t intentional, saying, “it sounds like you intentionally used AI, and intentionally filed it to the court.”

    “I did not intend to file an AI draft to the court,” Renfer said.

    Numbers also said, as Renfer explained the mechanics of how the erroneous filing was made, that the “challenge” is that Renfer’s lack of candor “calls into question any other statements” he made to the court.

    “I don’t know what to say,” Renfer said. “I can only tell you what I know.”
     

Share This Page