The Kids Online Safety Act (KOSA) would censor the internet and make government officials the arbiters of what young people can see online. It would likely lead to age verification, handing more power, and more private data, to third-party identity verification companies like Clear or ID.me. The government should not have the power to decide which topics are “safe” online for young people, or to force services to remove and block access to anything that might be considered unsafe for children. This isn’t safety; it’s censorship.

  • Calcharger@kbin.social · 1 year ago

    “You are definitely not a lawyer”

    Correct, but there’s no need to be rude.

    Let’s take a look at what Ari Cohn is arguing:

    “Platforms will still have to age-verify users, violating their First Amendment right to speak and access content anonymously,”
    “The revisions made to KOSA just trade an explicit mandate for a vague one. Uncertainty about when knowledge of a user’s age will be implied leads to the same result as before: the only way a platform can be confident it is in compliance is by age-verifying every user. At best, language purporting not to require such verification ignores this practical reality. At worst, it is a deliberate obfuscation of the bill’s intended effect.”

    Yeah, that was part of what I originally wrote and then had to delete. In retrospect I should have just split it and made replies. Oh well.
    The bill mentions:

    SEC. 9. AGE VERIFICATION STUDY AND REPORT.
    (a) Study.—The Director of the National Institute of Standards and Technology, in coordination with the Federal Communications Commission, Federal Trade Commission, and the Secretary of Commerce, shall conduct a study evaluating the most technologically feasible methods and options for developing systems to verify age at the device or operating system level.
    (b) Contents.—Such study shall consider —
    (1) the benefits of creating a device or operating system level age verification system;
    (2) what information may need to be collected to create this type of age verification system;
    (3) the accuracy of such systems and their impact or steps to improve accessibility, including for individuals with disabilities;
    (4) how such a system or systems could verify age while mitigating risks to user privacy and data security and safeguarding minors’ personal data, emphasizing minimizing the amount of data collected and processed by covered platforms and age verification providers for such a system; and
    (5) the technical feasibility, including the need for potential hardware and software changes, including for devices currently in commerce and owned by consumers.
    (c) Report.—Not later than 1 year after the date of enactment of this Act, the agencies described in subsection (a) shall submit a report containing the results of the study conducted under such subsection to the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Energy and Commerce of the House of Representatives.
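
    Just to make it easier to picture what a “device or operating system level” check that minimizes data collection could look like, here’s a purely hypothetical sketch in Python. This is my own illustration, not anything the bill, NIST, or any current OS actually specifies: the device keeps the birthdate, and an app only ever receives a yes/no answer.

```python
# Purely hypothetical sketch of a device/OS-level age check that minimizes data
# collection: the operating system holds the birthdate (entered once at device
# setup), and apps can only ask yes/no questions, so a platform never sees an
# ID scan or an exact birthdate. No such API exists today; every name here is
# invented for illustration.

from dataclasses import dataclass
from datetime import date


@dataclass
class DeviceAgeAttestor:
    """Stand-in for an OS service that knows the account holder's birthdate."""
    birthdate: date

    def is_at_least(self, years: int, today: date | None = None) -> bool:
        """Answer only "is the user at least N years old?" -- nothing else leaves the device."""
        today = today or date.today()
        # Whole years of age: subtract one if this year's birthday hasn't happened yet.
        age = today.year - self.birthdate.year - (
            (today.month, today.day) < (self.birthdate.month, self.birthdate.day)
        )
        return age >= years


# What a platform would actually receive from the device: a single boolean.
attestor = DeviceAgeAttestor(birthdate=date(2008, 6, 1))
print(attestor.is_at_least(18, today=date(2024, 1, 1)))  # False
print(attestor.is_at_least(13, today=date(2024, 1, 1)))  # True
```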

    So there isn’t necessarily a plan for Real ID out of the box; the study would have to be conducted first to determine which age verification method is most feasible. I understand the concerns about sharing your personal ID online. The study could very well conclude that the algorithms already in place are good enough to estimate a user’s age: my FYP on TikTok, for example, is filled with Millennial content based purely on what I’ve liked. But sure, the possibility of having to register your personal ID with every social media company doesn’t sound too appetizing.
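
    As a rough illustration of that kind of inference (the categories and logic below are entirely invented; this is not how TikTok actually works), a platform could guess an age bracket purely from the kinds of content someone engages with:

```python
# Toy illustration of inferring an age bracket from engagement signals alone.
# The categories, brackets, and voting rule are completely made up; real
# recommender systems use far richer signals and models than this.

from collections import Counter

# Invented mapping from content categories to the bracket most associated with them.
CATEGORY_TO_BRACKET = {
    "y2k_nostalgia": "25-40",
    "home_ownership": "25-40",
    "exam_prep": "13-17",
    "campus_life": "18-24",
    "retirement_planning": "40+",
}


def guess_age_bracket(liked_categories: list[str]) -> str:
    """Return the age bracket that the user's liked categories most often point to."""
    votes = Counter(
        CATEGORY_TO_BRACKET[c] for c in liked_categories if c in CATEGORY_TO_BRACKET
    )
    if not votes:
        return "unknown"
    bracket, _count = votes.most_common(1)[0]
    return bracket


print(guess_age_bracket(["y2k_nostalgia", "home_ownership", "campus_life"]))  # 25-40
```

    Even a crude heuristic like this shows why a platform might not need to see an ID to form an age estimate.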

    Continued In Reply

    • morrowind@lemmy.ml · 1 year ago

      I think we’ll just have to wait and see how tech companies implement this and how it’s enforced. Even the study is, as the letter points out, just guidance; it isn’t enforceable and can be ignored. The bill itself says very little beyond stating that it doesn’t explicitly require “age gating” or extra data collection to determine age.

      Also, as the letter itself points out:

      To date, COPPA has had negligible effects on adults because services directed to children under 13 are unlikely to be used by anyone other than children due to their limited functionality, effectively mandated by COPPA. But extending COPPA’s framework to sites “directed to” older teens would significantly burden the speech of adults because the social media services and games that older teens use are largely the same ones used by adults.

      Would it really be impossible to create separation between the sites used by older teens and those used by adults? A lot of that separation already happens culturally. I’m not as pessimistic about this as others are.