In recent years, deep synthesis technology has developed rapidly. While it serves user needs and improves the user experience, it has also been used by bad actors to produce, copy, publish, and disseminate illegal and harmful information, to slander and damage the reputation and honor of others, to impersonate others' identities, and to commit fraud. These abuses disrupt the order of communication and social order, harm the legitimate rights and interests of the people, and endanger national security and social stability.

The introduction of the "Regulations" responds to the need to prevent and resolve these security risks, as well as the need to promote the healthy development of deep synthesis services and improve regulatory capacity.

Providers of deep synthesis services shall add marks that do not affect the use of information content generated or edited with their services. Services that provide functions such as intelligent dialogue, synthesized human voice, human face generation, and immersive realistic scenes that generate or significantly change information content shall be marked prominently to avoid public confusion or misidentification.

The Regulations also require that no organization or individual use technical means to delete, tamper with, or conceal these marks.
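
For readers wondering what a mark "that does not affect the use of the content" plus a prominent label might look like in practice, below is a minimal illustrative sketch for a single image, written in Python with the Pillow library. Nothing in it comes from the Regulations themselves: the label wording, the metadata keys, and the file names are assumptions chosen for the example, and real providers would presumably use a standardized format rather than ad hoc PNG text chunks.

    # Minimal sketch, not from the article or the Regulations: one way a provider
    # might add both a prominent visible label and an embedded machine-readable
    # mark to a generated image. Uses the Pillow library; the label text, metadata
    # keys, and file names are illustrative assumptions.
    from PIL import Image, ImageDraw
    from PIL.PngImagePlugin import PngInfo

    def mark_generated_image(src_path: str, dst_path: str) -> None:
        img = Image.open(src_path).convert("RGB")

        # Visible, prominent label in a corner so viewers are not misled.
        draw = ImageDraw.Draw(img)
        draw.rectangle((0, 0, 220, 24), fill=(0, 0, 0))
        draw.text((6, 6), "AI-generated content", fill=(255, 255, 255))

        # Embedded metadata mark that ordinary viewing of the file does not disturb.
        meta = PngInfo()
        meta.add_text("synthetic-content", "true")
        meta.add_text("generator", "example-provider")
        img.save(dst_path, format="PNG", pnginfo=meta)

    mark_generated_image("generated.png", "generated_marked.png")

Audio and video output would presumably need analogous treatment, for example an audible notice or an on-screen banner plus container-level metadata.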

  • TankieTanuki [he/him]
    ·
    2 years ago

    Smart move. Transparency is good. I wonder how they'll try to spin this as China hiding something.

  • Upanotherday [he/him]
    ·
    2 years ago

    Leaders of the future.

    :china-stars:

    Seriously, it is so obvious this is going to be necessary if it isn't already.

    • Ligma_Male [comrade/them]
      ·
      2 years ago

      trouble is, once the fakes are good enough you can't enforce it.

      false-false-positives are gonna be a trip

      • silent_water [she/her]
        ·
        2 years ago

        the publicly available stuff is good enough that it gets past normies. there are almost certainly private systems that can fool even trained observers. those systems will be public inside of 5 years.

        • Ligma_Male [comrade/them]
          ·
          2 years ago

          hey at least that means you can say any embarrassing image of yourself is faked and people should believe you unless there's corroborating evidence.

          gonna be fun when it's a real picture of a politician doing something shady.

  • BrezhnevsEyebrows [he/him]
    ·
    edit-2
    2 years ago

    An unmarked AI-generated image of China's flag, which will be illegal in China after January 10, 2023.

    :xi:

    • blue_lives_murder [they/them]
      ·
      2 years ago

      that flag isn't even approaching 'immersive' or 'realistic' so this seems as yellow-peril-baiting as the 'DAE le evil see-see-pee, look at my poo' comments on the article

  • ssjmarx [he/him]
    ·
    2 years ago

    aww hell yes

    intelligent dialogue, synthesized human voice, human face generation, and immersive realistic scenes that generate or significantly change information content

    This seems to intentionally cut out AI generation of obviously-fake scenes, like that one image model that turns you into an anime character, from the regulation. I guess they're focused specifically on images and text that could be confused for actual photographs or people, but it will be interesting to see if this is where they leave it or if another regulation comes down for AI-generated images that aren't trying to mimic reality.

      • ssjmarx [he/him]
        ·
        2 years ago

        I mean fair enough, but sooner or later some government is going to have to come up with an answer to the copyright question when a model generates an image that's really similar to an extant work.

        • chickentendrils [any, comrade/them]
          ·
          2 years ago

          Yeah, that is the question that Western governments will probably tackle. Shows how utterly unconcerned with copyright I am that I didn't even think of it; it's just not a thing I've ever considered legitimate lol.

    • ComradeLove [he/him, comrade/them]
      ·
      edit-2
      2 years ago

      Now I'm getting all meta and imagining that some AI put out this "official proclamation" as a deep fake.

      "AI, please write an article from Ars Technica entitled "China bans AI-generated media without watermarks""

      • ssjmarx [he/him]
        ·
        2 years ago

        Fun fact!

        You can't do that in Western countries either!

  • UlyssesT [he/him]
    ·
    2 years ago

    Now this is a good idea and would address most of my problems with the way this technology is being applied right now.

  • berrytopylus [she/her,they/them]
    ·
    2 years ago

    People here are rightfully questioning how they would enforce this against private individuals on the internet, but public figures like celebrities and corporations are going to be following this law.

    • invalidusernamelol [he/him]
      ·
      2 years ago

      Yeah, this is probably most useful for preventing companies from laying off design teams and just using AI art for everything, because the watermark will make it look cheap.

      No one cares what individuals do.