My main account is [email protected]. However, as of roughly 24 hours ago (though it looks like this has actually been going on since March 10th and has gotten worse since), the server seems to have stopped properly retrieving content from lemmy.world.

It’s been running smoothly for well over 9 months and is (I think) working fine for content coming in from other instances. So I’m curious: has anyone else experienced anything strange with lemmy.world federation recently?

Setup Description

The server flow in my case is as follows:

[Public Internet] <-> [Digital Ocean Droplet] <-> [ZeroTier] <-> [Physical Machine in my Basement (HW Info)]

The Digital Ocean droplet is a virtual host machine that forwards requests via nginx to the physical machine, where a second nginx server (running the standard Lemmy nginx config) forwards the request to the Lemmy server software itself.
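For reference, a rough sketch of the droplet side of that chain, assuming a plain proxy_pass over the ZeroTier interface (the ZeroTier address/port and cert directives are placeholders, not my real config):

    # Digital Ocean droplet nginx: terminate public traffic and forward it
    # over ZeroTier to the nginx instance on the physical machine.
    server {
        listen 443 ssl;
        server_name social.packetloss.gg;
        # ssl_certificate / ssl_certificate_key directives omitted

        location / {
            # placeholder ZeroTier address of the basement machine
            proxy_pass http://10.147.20.10:80;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }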

Current Status

Lemmy Internal Error

I’ve found this in my Lemmy logs:

2024-03-24T00:42:10.062274Z  WARN lemmy_utils: error in spawn: Unknown: Request limit was reached during fetch
   0: lemmy_apub::objects::community::from_json
             at crates/apub/src/objects/community.rs:126
   1: lemmy_apub::fetcher::user_or_community::from_json
             at crates/apub/src/fetcher/user_or_community.rs:87
   2: lemmy_server::root_span_builder::HTTP request
           with http.method=POST http.scheme="http" http.host=social.packetloss.gg http.target=/inbox otel.kind="server" request_id=688ad030-f892-4925-9ce9-fc4f3070a967
             at src/root_span_builder.rs:16

I’m thinking this could be the cause… though I’m not sure how to raise the limit (it seems to be hard-coded). I opened an issue with the Lemmy devs, but I’ve since closed it while gathering more information and making sure this is truly an issue with the Lemmy server software.
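As I understand it (and this is only a conceptual sketch, not Lemmy's actual code), the federation layer caps how many remote objects a single incoming request is allowed to trigger fetches for, and the warning fires once that hard-coded cap is hit. Roughly:

    // Conceptual sketch only; NOT Lemmy's real implementation.
    // A per-request counter that refuses further remote fetches once a
    // hard-coded cap is reached, which is what "Request limit was reached
    // during fetch" appears to describe.
    struct FetchCounter {
        used: u32,
        limit: u32, // hypothetical hard-coded cap
    }

    impl FetchCounter {
        fn try_fetch(&mut self) -> Result<(), String> {
            if self.used >= self.limit {
                return Err("Request limit was reached during fetch".into());
            }
            self.used += 1;
            Ok(())
        }
    }

    fn main() {
        let mut counter = FetchCounter { used: 0, limit: 3 };
        for i in 1..=4 {
            match counter.try_fetch() {
                Ok(()) => println!("fetch {i}: ok"),
                Err(e) => println!("fetch {i}: {e}"),
            }
        }
    }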

Nginx 408 and 499s

I’m seeing the Digital Ocean nginx server report 499s on various “/inbox” route requests, and the nginx running on the physical machine that talks directly to Lemmy report 408s on various “/inbox” route requests.
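(For reference: 499 is an nginx-specific status meaning the client closed the connection before nginx sent a response, while 408 means nginx gave up waiting for the rest of the request.) If timeouts are the culprit, these are the nginx knobs involved; a sketch of the inner server block with illustrative values, not necessarily what I'm actually running:

    # Fragment of the inner (basement machine) nginx config; values are examples only.
    server {
        # 408 is returned when these expire while waiting on the request itself
        client_header_timeout 60s;
        client_body_timeout   60s;   # time allowed between successive body reads

        location /inbox {
            proxy_pass http://127.0.0.1:8536;    # placeholder Lemmy backend address
            proxy_connect_timeout 10s;
            proxy_read_timeout    120s;          # how long to wait on Lemmy's response
            proxy_send_timeout    120s;
        }
    }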

There are some examples in this comment: https://lemmy.world/comment/8728858

  • redcalcium@lemmy.institute · 5 months ago

    You can check lemmy.world’s federation status at: https://phiresky.github.io/lemmy-federation-state/site?domain=lemmy.world . There, you can see that social.packetloss.gg is listed among the “407 failing instances”.

    You’ll need to check whether your server is actually configured to receive federation traffic. If you’re using Cloudflare or some other web application firewall, make sure it isn’t applying any anti-bot measures on the /inbox endpoint. For example, in Cloudflare, create a new WAF rule (Security -> WAF) for /inbox and set it to skip all security.

    If you don’t use any web application firewall at all: did you upgrade your instance from v18.x to v19.x recently, right before experiencing the federation issue? v19.x has increased resource consumption and will have problems running on a small server after it’s been up for a while. For a small VPS (~4 GB of RAM), you might want to adjust the database pool_size to <30 in the lemmy.hjson file. Restarting Lemmy AND postgres every once in a while also helps if you’re on a small VPS.
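    For example, the relevant bit of lemmy.hjson would look something like this (the exact number depends on your postgres max_connections; 20 is just an illustration):

        {
          # only the database section is shown
          database: {
            # keep this comfortably below postgres max_connections
            pool_size: 20
          }
        }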

    • Dark Arc@lemmy.worldOP · 5 months ago

      It’s been up on 19.x for a few months now. It’s also a full-on bare metal server with a ton of resources; it’s not at all starved.

      It’s almost like someone posted something somewhere that “jammed” Lemmy and it just won’t get past it, but I’m not sure how to figure out what that would be or how to unjam things.

      • redcalcium@lemmy.institute · 5 months ago

        Could be postgres-related. Federation is only “jammed” if the source instance thinks your instance is having an issue because it takes too long to respond. Maybe enabling slow query log on postgres and then reviewing that log could point you in the right direction.
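        For example, in postgresql.conf (the threshold is illustrative):

            # log any statement that takes longer than 1000 ms
            log_min_duration_statement = 1000
            # after editing, reload the config, e.g. SELECT pg_reload_conf();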

    • TimLovesTech (AuDHD)(he/him)@badatbeing.social · 5 months ago

      Using that site and checking my instance against lemmy.world, your site is listed under the working ones. I know spam has been on the rise again on the big instances, so that could be playing a part in this as well.

  • Demigodrick@lemmy.zip · 5 months ago

    Here’s a handy site to check your federation status: https://phiresky.github.io/lemmy-federation-state/site?domain=social.packetloss.gg

    It looks like all the sites are lagging, which can sometimes mean there is an issue. What are the specs of the server? Can you try restarting it and seeing if that helps?

    There are known issues for servers hosted in places like Australia because of the latency in communicating with .world, due to how big it is and how much data it sends as a result.

    • Dark Arc@lemmy.worldOP · 5 months ago

      So, I think this is a (helpful) general comment but wrong in this/my specific case.

      The instance is so small (traffic-wise) that it’s not really going to register on a 10-minute frequency for outgoing content – I’m not that much of a Lemmy addict! haha.

      You can see in a comment here my most recent comment to lemmy.world did sync: https://lemmy.world/comment/8728858

      I’m not having any issues with outgoing content, or with beehaw, the KDE instance, and several others. It’s just lemmy.world that’s acting up (which is unfortunate because it’s my favorite – I mod/run several communities there and donate to them – haha).

  • mesamune@lemmy.world · 5 months ago

    I have the same issue. I have a very small instance and lemmy.world seems to not work no matter what I do. I can get lemmy.ml with no issues, and some of the others, but for some reason .world just won’t work with any communities, like everything is blacklisted or something.

    • Roman0@lemmy.shtuf.eu · 5 months ago

      If you’re running it in Docker you can just check the logs. I do it like this: docker compose logs -f lemmy, and then see if you have requests from any instance in the log stream. For me it goes pretty fast, but you can always Ctrl+C to exit and scroll up to see what you’ve missed. It might not be the most optimal way, but it works for me.
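      If you want to narrow the stream down to just lemmy.world traffic, something like this should work (assuming the remote activity/actor URLs show up in the log lines):

          docker compose logs -f lemmy | grep -i lemmy.world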

  • Decronym@lemmy.decronym.xyzB · 5 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    DNS            Domain Name Service/System
    HTTP           Hypertext Transfer Protocol, the Web
    IP             Internet Protocol
    VPS            Virtual Private Server (opposed to shared hosting)
    nginx          Popular HTTP server

    [Thread #624 for this sub, first seen 23rd Mar 2024, 17:55]