I've fucked around a bit with ChatGPT and while, yeah, it frequently says wrong or weird stuff, it's usually fairly subtle shit, like crap I actually had to look up to verify it was wrong.

Now I'm seeing Google telling people to put glue on pizza. That's a bit bigger than getting the name of George Washington's wife wrong or having Jay Leno's birthday off by 3 years. Some of these answers are so cartoonish in their wrongness that I half suspect some engineer at Google is fucking with it to prove a point.

Is it just that Google's AI sucks? I've seen other people say that AI is now getting info from other AIs and it's leading to responses getting increasingly bizarre and wrong, so... idk.

  • FunkyStuff [he/him]
    ·
    3 months ago

    Google is pivoting hard into AI, so I doubt their model is cheap at all. Unless they're running a much smaller version for Google search compared to bespoke Gemini conversations.

    • BeamBrain [he/him]
      ·
      3 months ago

      Never underestimate the ability of capitalists to cut corners.

      • D61 [any]
        ·
        3 months ago

        Cutting so many corners off this thing we're just running in circles now...

        [four quarter-circle border-arc emotes arranged into a circle]

    • JohnBrownsBussy2 [he/him]
      ·
      3 months ago

      I wouldn't be surprised if they're using a variant of their 2B or 7B Gemma models, as opposed to the monster Gemini.

      • QuillcrestFalconer [he/him]
        ·
        3 months ago

        Almost surely. If they're generating an overview at search time, they need a very fast model. You can cache the overviews for the most common searches, but for everything else you have to run inference at query time, so the model has to be small and fast. Something like the pattern sketched below.
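
        A rough sketch of that cache-then-infer idea (every name and the model interface here is made up for illustration; Google's actual serving stack isn't public):

        ```python
        import hashlib

        # Hypothetical cache-then-infer setup: serve precomputed overviews
        # for common queries, fall back to a small/fast model otherwise.
        overview_cache: dict[str, str] = {}  # stand-in for a real distributed cache

        def cache_key(query: str) -> str:
            # Normalize so trivially different phrasings hit the same entry.
            normalized = " ".join(query.lower().split())
            return hashlib.sha256(normalized.encode()).hexdigest()

        class StubModel:
            """Stand-in for a small model (say, a 2B Gemma); not a real API."""
            def generate(self, query: str) -> str:
                return f"(generated overview for: {query})"

        def get_overview(query: str, small_model: StubModel) -> str:
            key = cache_key(query)
            if key in overview_cache:
                # Head queries: no inference cost at search time.
                return overview_cache[key]
            # Tail queries: inference has to fit in a page-load latency
            # budget, which is why the model needs to be small and fast.
            overview = small_model.generate(query)
            overview_cache[key] = overview
            return overview

        print(get_overview("can you put glue on pizza", StubModel()))
        ```

        The point of the cache is that query traffic is extremely head-heavy, so even a small hit rate on common searches saves a huge amount of inference; everything else eats the latency of a live model call, hence the pressure toward tiny models.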