I mean, this is because of a technical issue, likely on Kbin's side. Which is not a shock.
Also, I posted 2 threads to kbin communities recently: 1 got most of its activity from LW and the other got 4 favorites from different instances and no comments (and it did not federate to LW, though I don't think that was related to the temporary block). LW could be too big, but kbin seems kind of dead for communities that aren't constantly in the feed (likely because of the same people posting, in many cases). Though technical issues could always be part of it in one way or another.
except it doesn’t work well for the rest of lemmy/the fediverse.
many other instances seem to be getting hit by this, but they don’t have as many activities generated locally for this to become much of a problem. additionally, this is mostly affecting instances with high latency to the instance that is being flooded by kbin, as lemmy currently has an issue where activity throughput between instances with high latency can’t keep up with too many activities being sent. the impact of this can be a bit less on smaller instances, as smaller communities often don't have as many subscribers on remote instances, although we’ve seen problems reported by some other admins as well. this includes e.g. kbin.earth, which i suspect to have been hit by responses from a lemmy instance, while the lemmy instance was actually only answering the requests sent from that kbin instance.
during the last peak, when we decided to pull the plug for now, kbin.social was sending us more than 20 activities per second for 7 hours straight. lemmy.world can easily handle this amount of activities, but the problem arises when this impacts our federation towards other (lemmy) instances, as e.g. votes will get relayed by the community (magazine) instance, which means, depending on the type of activity being sent, we might have to send out the same 20 requests per second to up to 4,000+ other fediverse instances that are subscribed/following the community this is happening in. sending 20 requests per second, which lemmy does not do in parallel, gives us at most 50ms total sending time per activity to avoid creating lag. when the instance is in australia, with 200ms+ latency, this is simply not possible.
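the arithmetic above can be sketched in a few lines. the 20/s rate and 200ms latency are taken from this post; everything else is just back-of-envelope illustration of why sequential sending can't keep up:

```python
# Sketch of the sequential-send budget described above.
# incoming_rate and latency_s come from the post; the rest is derived.

incoming_rate = 20    # activities per second arriving for relay
latency_s = 0.200     # round-trip time to a distant (e.g. Australian) instance

# With strictly sequential sends, each activity occupies the sender
# for one full round trip, so deliverable throughput is capped at 1/RTT.
max_rate = 1 / latency_s        # activities per second actually deliverable
budget_s = 1 / incoming_rate    # time allowed per activity to keep up

# Anything above max_rate piles up as an ever-growing backlog.
backlog_growth = incoming_rate - max_rate

print(f"budget per activity: {budget_s * 1000:.0f} ms")      # 50 ms
print(f"deliverable at 200 ms RTT: {max_rate:.0f}/s")        # 5/s
print(f"backlog growth: {backlog_growth:.0f} activities/s")  # 15/s
```

so at 200ms latency the sender falls behind by roughly 15 activities every second, for each high-latency destination.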
ps: if you’re wondering how i’m seeing this post, you can search for a post url and comment urls on lemmy to make lemmy fetch them, even if they haven’t been directly submitted through normal federation processes. this requires a logged in user on lemmy’s end.
Well that explains why a couple threads I posted weren't gaining any traction. I hope the issue gets figured out soon, otherwise I'll have to use my Mbin alt for anything lemmy.world related.
Is there a way to leverage downvotes to limit content, at the magazine or instance level? I think I know the answer, but it would be dandy to put in place.
I said that already, mbin might be good but I didn't like the advertisement.
I'm not quite sure what your position is. I am by no means an mbin booster. In fact, I find some of the people pushing mbin over kbin (in lieu of it, not in addition to it) jerks about it.
This whole thread has been about the similarities I noticed between comments people made about/to Ernest previously and the sort of comments we later learned led to the xz backdoor.
When I started to read breakdowns about the social engineering behind the xz backdoor I was like, "Waaaaitaminute, I've seen that sort of talk before." I found it notable to point out the similarity and maybe poke around at it.
People decided to use the thread (to my excessive chagrin) to talk shit about kbin and rehash the exact same pressures I was attempting to analyze.
It's a shame, because I noticed similar patterns and was looking forward to some good discussion about it here. Alas...
However, it's much more likely to be due to the common experience of solo devs whose projects blow up than it is about bad actors on kbin.
If you're so inclined, you can always check the profiles of those who were pushing for it and particularly those who were volunteering; the boehs.org link should supply some helpful red flags to look for. Ernest would be wise to check IP activity and even ask for IRL credentials of those he would consider giving any real level of access to. Beyond that, it's firmly in the realm of "mildly interesting."
I just need a little more time. There will likely be a technical break announced tomorrow or the day after tomorrow. Along with the migration to new servers, we will be introducing new moderation tools that I am currently working on and testing (I had it planned for a bit later in my roadmap). Then, I will address your reports and handle them very seriously. I try my best to delete sensitive content, but with the current workload and ongoing relocation, it takes a lot of time. I am being extra cautious now. The regulations are quite general, and I would like to refine them together with you and do everything properly. For now, please make use of the option to block the magazine/author.
Many of us are getting banned from other instances because there is a bug where kbin is sending way too much traffic per interaction. I know that was affecting federation (according to other instance admins), so it might have something to do with that. The content on kbin does seem to me like it's not in sync at all, but I haven't measured it.
All we can really do for now is hope for a fix and not interact with posts from other instances.
This post made it over to reddthat, so your posts are federating out.
Checking over on my kbin account as well, I can see content in /newest from multiple sources. /sub returns 404 but I think that’s just a caching bug (adding ?p=1 to the end of the URL lets me workaround it).
It took about a minute for my comment from reddthat to show up here, but it looks like it made it through ok, so inbound comments are working. (Note: replying to myself from my kbin account)
Thanks. I actually was seeing something similar at the start of the weekend, but it sorta reversed on me: now sub works and newest gives me the 404. I have been looking through my sub and I am seeing lemmy mags. Thanks for the feedback.
I just gave this a try and I think there's a potentially worrisome problem: it silently failed on a lot of community subscriptions. The ones that returned HTTP 500 errors were listed in the "fail" list that the importer script generated, but a whole bunch of others returned 404 errors and weren't listed in either the success or fail lists.
So I advise those running this to pay attention to the error log to avoid losing track of those communities rather than trusting the "fail" list.
I'll try to reproduce this and look into tightening the error handling. A 404 error should imply that the magazine is not available at the remote. Are those magazines available at the target instance? Agree that those should at least be added to the log; perhaps I should add a third category for "Unavailable." Remember that it will also navigate you to the magazines list at the end for visual confirmation.
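A minimal sketch of the three-way bucketing discussed here. The function and list names are illustrative, not the importer's actual API; the point is just that 404s get their own category instead of vanishing from both lists:

```python
# Hypothetical sketch: map an HTTP status from a subscribe attempt
# to one of three result buckets. Names are illustrative only.

def categorize(status_code: int) -> str:
    """Classify a subscribe attempt by its HTTP status."""
    if 200 <= status_code < 300:
        return "success"
    if status_code == 404:
        # Magazine unknown at the remote: give it its own bucket so it
        # isn't silently dropped from both the success and fail lists.
        return "unavailable"
    return "fail"  # 500s and anything else unexpected

results = {"success": [], "unavailable": [], "fail": []}
# Example statuses; in the real script these would come from the responses.
for magazine, status in [("foo@kbin.social", 200),
                         ("bar@kbin.social", 404),
                         ("baz@kbin.social", 500)]:
    results[categorize(status)].append(magazine)
```

With something like this, every attempted magazine ends up in exactly one of the three logs.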
When you said community subscription, were you referring to something in particular, or just using this term generically to refer to magazines?
I haven't tried all of them, but the ones I did check were ones that had not had posts on them at their source instance for quite a while. A few random examples:
I had 43 failures and 111 successes, so visual inspection wouldn't really help. I kept copies of the error log and the script output in a text file to figure it out later.
I assume that this means these communities haven't had activity since fedia.io opened, and so fedia.io doesn't know they exist? I've always wondered how the first person to subscribe to a community on an instance is able to do that.
And yeah, I'm using "community" to refer to "magazine".
Hmm, this is a good finding. Just on a cursory review, I had a look at the magazines list on fedia, and it does list magazines with zero threads, comments, posts, or subscribers on them (on other instances other than kbin.social). So maybe you've discovered a problem with kbin.social's federation? I don't know too much about this issue, so this is just my initial reaction before looking into it further.
Addressing your issue, I have bumped the version number to 0.1.3 and made a change to the async method handling so that instances not available at the remote get added to the fail log correctly.
This doesn't explicitly address the fact that some instances are unfederated, but it will make the log results clean.
As for the federation issue, what I've initially found is that a user on an instance has to visit the remote instance for the home instance to be aware of this remote instance, and a user (could be a different user) has to subscribe to that instance for the posts to start federating. What is unclear is how a user on an instance visits a remote instance from the home instance, as this is implementation-specific and could vary from instance to instance.