One foot planted in “Yeehaw!”, the other in “yuppie”.

  • 4 Posts
  • 73 Comments
Joined 6M ago
Cake day: Jun 11, 2023


I dunno, my OLED panel has some notable image retention issues - and a screensaver does appear to help in that regard.


Eh, I went back to screen savers due to my use of OLED panels. Better than a static lock-screen image for sure.


This article is ancient. We have more recent elections to go off of.

And according to basically everything I can find, “Moms For Liberty” and related groups suffered major losses basically everywhere the last cycle.

I’m not suggesting you shouldn’t worry - after all, it’s worry that got people out to make sure they didn’t win. But I am suggesting that your information is very out of date and that you should do a better job of finding recent evidence to support your claim.

Also, I think this is off topic for this community and seems far more like political bait as some have pointed out.


“Your application” - the customers, you mean. Our DB definitely does its own rate limiting, and it emits rate-limit warnings and errors as well. I didn’t say we advertised infinite IOPS; that would be silly. We are totally aware of the scaling factors there, and to date IOPS-based scaling is rarely a Sev1 because of it. (Oh no, p99 breached 8ms. Time to talk to Mr. Customer about scaling up soon.)

The problem is that the resulting cluster is so performant that you could load in 100x the amount of data and not notice until the disk fills up. And since these are NVME drives on cloud infrastructure, they are $$$.

So usually what happens is that the customer fills up the disk arrays so fast that we can’t scale the volumes/cluster fast enough to avoid stop-writes let alone get feedback from the customer in time. And now that’s like the primary reason to get paged these days.

We generally catch gradual disk space increases from normal customer app usage. Those give us hours to respond and our alerts are well tuned. It’s the “Mr. Customer launched a new app and didn’t tell us, and now they’ve filled up the disks in 1 hour flat.” that I’m complaining about.


It is definitely an under-provisioning problem. But that under-provisioning problem is caused by the customers usually being very, very stingy about what they are willing to spend. Also, to be clear, it isn’t buckling. It is doing exactly the thing it was designed to do, which is to stop writes to the DB since there is no disk space left. And before it gets to that point, it’s constantly throwing warnings to the end user. Usually these customers tend to ignore those warnings and errors until they reach the stop-writes state.

In fact, we just had to give an RCA to the c-suite detailing why we had not scaled a customer when we should have, but we have a paper trail of them refusing the pricing and refusing to engage.

We get the same errors, and we usually reach out via email to each of these customers to help project where their data is going and scale appropriately. More frequently though, they are adding data at such a fast clip that them not responding for 2 hours would lead them directly into stop-writes status.

This has led us to guessing where our customers are going to end up, oftentimes being completely wrong and having to scale multiple times.
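
For what it’s worth, the guessing can at least be anchored to a projection. Here’s a minimal sketch of the kind of math involved (the numbers and function name are hypothetical, not our actual tooling):

    # Rough time-to-stop-writes projection from two disk-usage samples.
    # All figures are hypothetical; real tooling would use a longer window
    # of metrics and per-node capacities.
    def hours_until_full(used_gb_then, used_gb_now, hours_between, capacity_gb):
        """Linear projection of when the cluster hits stop-writes."""
        growth_per_hour = (used_gb_now - used_gb_then) / hours_between
        if growth_per_hour <= 0:
            return float("inf")  # flat or shrinking usage, no deadline
        return (capacity_gb - used_gb_now) / growth_per_hour

    # Example: cluster with 20 TB usable, customer went from 11.0 TB to
    # 12.5 TB used over the last 6 hours -> about 30 hours of runway.
    print(f"{hours_until_full(11_000, 12_500, 6, 20_000):.1f} hours to stop-writes")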

Workload spikes are the entire reason our database technology exists. That’s the main thing we market ourselves as being able to handle (provided you gave the DB enough disk and the workload isn’t sustained long enough to fill the disks).

There is definitely an automation problem. Unfortunately, this particular line of our managed services will not be able to be automated. We work with special customers with special requirements - usually Fortune 100 companies with extensive change-control processes, custom security implementations, and sometimes even no access to their environment unless they flip a switch.

To me it just seems to all go back to management/c-suite trying to sell a fantasy version of our product and setting us up for failure.


That is exactly what we do. The problem is that, as a managed service offering, it is on us to scale in response to these alerts.

I think people are misunderstanding my original post. When I say that a customer cluster will go into stop-writes, that does not mean it is not functional. It is an entirely intended function of the database so that no important data is lost or overwritten.

The problem is more organizational. It’s that we have a 5 minute SLA to respond to these types of events and that they can happen at any random customer impulse.

I don’t have a problem with customers that can correctly project their load and let us know in advance. Those are my favorite customers. But they’re not most of our customers.

As for automation: as I exhaustively detailed in another response, we do have another product that does this a lot better, and it’s the one we are mass-marketing a lot more. The one where I’m feeling all the pain is actually our enterprise-level managed service offering, which goes to customers that have “special requirements” - which usually means they will never get automation as robust as the other product line.


Our database is actually pretty graceful. It just goes into stop-writes status. You can still read any data, and resolving the situation is as easy as scaling the cluster or removing old records. By no means is the database down or inoperable.

Essentially our database is working as designed. If we rate-limited it further, we’d have less of a product to sell. The main features we sell of our database technology are its IOPS and resiliency.

Further, this is just for a specific customer; it has no impact on any other customers or any sort of central orchestration. Generally speaking, stop-writes status only ever impacts a single customer and their associated applications.

Also, customers can be very stingy with the clusters they are willing to buy. We’re actually on poor terms with the couple of customers who just refuse to scale and expect us to magic their cluster into accepting more data than it’s sized for.


Probably not feasible in our case. We sell our DB tech based on the sheer IOPS it’s capable of. It already alerts the user if the write cache is full or if the replication cache is backing up, too.

The problem is, at full tilt, a 9-node cluster can take on over 1 GB/s in new data. This is fine if the customer is writing over old records and doesn’t require any new space. It’s just more common that Mr. Customer added a new microservice and didn’t think through how much data it requires, causing a rapid increase in DB disk usage or IOPS that the cluster wasn’t sized for.
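
To put that rate in perspective, a quick back-of-the-envelope calc (the free-space figure is hypothetical, not any particular customer):

    ingest_gb_per_s = 1.0   # sustained new-data rate at full tilt
    free_space_tb = 20      # hypothetical remaining capacity
    hours_to_full = free_space_tb * 1000 / ingest_gb_per_s / 3600
    print(f"{hours_to_full:.1f} hours until stop-writes")  # ~5.6 hours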

We do have another product line in the works (we call it DBaaS) and that can autoscale because it’s based on clearly defined service levels and cluster specifications. I don’t think that product will have this problem.

It’s just that these super mega special (read: big, important, Fortune 100) companies have requirements that mean they need something more hand-crafted. Otherwise we’d have automated the toil by now.


As an SRE, what do I do about Alerts caused almost entirely by poor customer communication or misuse of a product?
A bit more context, since you might wonder why customers can cause Sev1s. Well, I work for a database technology company and we provide a managed service offering. This managed service offering has SLAs that essentially enforce a 5-minute response time for any "urgent" issue.

Well, a common urgent issue is that the customer suddenly wants to load in a bunch of new data without informing us, which causes the cluster to stop accepting write loads. It's to the point where _most_, if not all, urgent pages result in some form of scaling of the cluster. Since this is customer-driven behavior, there is no real ability to plan for it - and since these particular customers have special requirements (and thus, less ability to automate scaling operations), I'm unsure if there is any recourse here.

It's to the point that it doesn't even feel like an SRE team anymore - we should just be called "on-demand scaling agents", since we're constantly trying to scale ahead of our customers.

All in all, I'm starting to feel like this is a management/sales-level issue that I cannot possibly address. If we're selling this managed service offering as essentially "magic" that can be scaled whenever they need, then it seems like we're being set up for failure at the organizational level. Not to mention not being smart about the costs behind scaling and factoring them into these contracts.

So, fellow SREs: have you had to have this conversation with a larger org? What works for something like this? What doesn't? Should I just seek greener pastures at this point?

P.S. - Posted to c/Programming due to lack of a c/SRE

I’d like to report in as someone who’s at the end of that process and actually making good money.

Now I need:

More time to hang out with friends and family. 🥲


As a man who grew up with one foot firmly planted in yeehaw and the other in yuppie, I think this is brilliant!


I don’t get it either. My brother-in-law is like this. And he refused to take his kids to see Buzz Lightyear because of its “political” nature. I was dumbfounded when I heard that. To think that representation is just some nebulous political aim.

At this rate, we should just consider any media with a kiss in it “political media.”

And I even grew up with this dude in the early 2000s. He didn’t seem like this before.

I try to forget about the guy, but it’s kind of hard because he won’t let me see the nieces because I’m too “liberal”.


I agree. I think 1440p+HDR is probably the way to go for now. HDR is FAR more impactful than a 4K resolution and 1440p should give you a stable 45ish FPS on Cyberpunk 2077 completely maxed out on an RTX 3080Ti (DLSS Performance).

And in terms of CPU, the same applies. 16 cores are for the Gentoo-using, source-compiling folks like me. 8 cores on a well-binned CPU from the last 3 generations is plenty fast for gaming. CPU bottlenecking only really shows up at 144 FPS+ in most games anyways.


Agree, most mainstream distros have it all handled for the most part and it normally “just works”.

Now, myself on Gentoo testing on the other hand… Sometimes I shoot myself in the foot and forget to rebuild my kernel modules and wind up needing to chroot to fix things - all because I have an NVidia card.


I knew from the thumbnail that this is Virtue by Overwerk. Excellent track!


As somebody with autism, I find this take lacking nuance. You see, for me these tools represent a huge leap in accessibility. I can turn a wall of stream-of-consciousness text into something digestible that actually represents me.

I find myself constantly exhausted with the societal expectation that I review, edit, and adjust my own speech constantly. And these tools go a long way to helping me actually communicate.

I mean, after all nothing changes for me. People thought of me as a robot before. And I guess they can continue to think I’m still a robot. I’ve stopped giving a crap about neurotypical expectations.


I mean, I take a less extreme view, but it definitely resonates. As somebody with autism, it’s really nice to have an impartial chat assistant to turn my stream-of-consciousness wall of text into something far more digestible. Trying to do so myself often takes hours to construct a message a couple of paragraphs long, where I check and double-check and triple-check for anything that might offend somebody, come across strange, not flow well, etc., etc.

A lot of these articles don’t really investigate the accessibility aspect of these tools. And I really wish they did. I know if one of my friends used chatgpt to help with their messages, I would be completely fine with it.


I’m the instance admin of Tucson.social and I support this message.

You see, Lemmy is steeped in what I like to call “Tech bro culture” - maybe not the original devs, but definitely the community that espouses these “tips”. These folks, despite their education, often fail to understand how non-technical people think, or even just how less technical (but completely competent) folks think.

Let me tell you what it requires to host an instance:

  • Intermediate Linux Skills
  • Basic to Intermediate Docker Skills
  • Intermediate to Advanced Networking skills
  • Intermediate to Advanced Information Security Skills
  • LOTs of Time, especially when no one else wants to moderate or administrate.

And that’s just the TIP of the iceberg. Sure you can run a completely private instance that negates the need for heavy moderation, but you still need to protect that instance and make sure it works from a wide range of devices and networks.

So yeah, we see many instances that were created that are now dead or dying because the instance admins didn’t know they needed DDOS protection, or CAPTCHA, or any number of security tools, and now they are at the whim of bad actors or simply couldn’t keep up with the poorly documented changes that have now broken their instance.

Then, once you get past that issue and you have a popular instance, regulatory compliance becomes a problem. This is intrinsically linked to the ability to moderate the content. Sure, there are ways to automatically report illegal content, but in, say, a NSFW community, that’s a never-ending battle that could end up with a subpoena or 10.

So yeah, I recommend anyone who isn’t a seasoned Infrastructure / DevOps / InfoSec / Full Stack Engineer stay away from creating their own instances for now because those that do end up creating “Bot Bastions” that make the fediverse worse, not better.


I dunno what this GM is doing but I find that ChatGPT (GPT4 particularly) does wonderfully as long as you clearly define what you are doing up front, and remember that context can “fall off” in longer threads.

Anyways, here’s a paraphrasing of my typical prompt template:

I am running a Table Top RPG game in the {{SYSTEM}} system, in the {{WORLD SETTING}} universe. Particularly set before|after|during {{WORLD SETTING DETAILED}}.

The players are a motley crew that include:

{{ LIST OF PLAYERS AND SHORT DESCRIPTIONS }}

The party is currently at {{ PLACE }} - {{ PLACE DETAILS }}

At present the party is/has {{ GAME CONTEXT / LAST GAMES SUMMARY }}

I need help with:

{{ DETAILED DESCRIPTION OF TASK FOR CHAT GPT }}

It can get pretty long, but it seems to do the trick for the first prompt - responses can be more conversational until it forgets details - which takes a while on GPT4.
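
If you want to reuse the template programmatically rather than filling it in by hand, here’s a minimal sketch (the placeholder names mirror the ones above; the sample values are made up):

    # Tiny helper to fill the template above before pasting it into ChatGPT.
    TEMPLATE = """\
    I am running a Table Top RPG game in the {system} system, in the {world} universe. \
    Particularly set {timeframe}.

    The players are a motley crew that include:
    {players}

    The party is currently at {place} - {place_details}

    At present the party is/has {context}

    I need help with:
    {task}"""

    prompt = TEMPLATE.format(
        system="Savage Worlds",
        world="Deadlands",
        timeframe="during the Great Quake of '68",
        players="- Hattie, a gunslinger with a grudge\n- Miles, a huckster who bluffs too much",
        place="the town of Coffin Rock",
        place_details="a half-abandoned mining town",
        context="just survived an ambush at the saloon",
        task="three rumors the bartender could share, each pointing to a different hook",
    )
    print(prompt)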


Sometimes I wonder if the early version of the internet (the one that millennials grew up with) was too accepting of the “online edgelord” mentality. You know, the people who don’t believe their own words, just spouting stuff because it makes them look edgy and cool. Like, I know a younger me thought being edgy was cool, and I took that version of myself to online spaces - it wasn’t shut down like it should’ve been. However, I did end up growing out of it, only to realize my old friends never did. Even in their 30s they still act like “top kek memelords” and are some of the saddest and loneliest people I know.

It kinda made me realize that “grown up people” online need to NOT put up with that crap. Like, zero tolerance: “Oh, you’re being an edgelord today? Temp ban - come back when you grow up.”

These same people, who were my friends back in the high-school days, often feel “persecuted” when they can’t be edgelords anymore. After all, it was just SO NORMAL before. “It’s just a joke bro!” And now every time they interact with society it’s through a lens of persecution, because they can’t be as edgy as they want anymore.

THEN it gets to bad-faith bullshit as external bad actors feed the narrative that they “get” to be edgelords and that’s what freedom of speech means - which then becomes a slide into alt-right and incel territory.

It’s exhausting, and honestly, I have a bit of myself to blame here - I was more accepting of that type of behavior rather than pushing back on it. I even think that extends to the larger millennial cohort as well. We just kind of “accepted” 4chan and the trash that came out of it for so long that many just feel entitled to be edgelords these days.


I mean, I use beehaw’s defederation list as a basis for mine over at tucson.social - I’d love to make it more automated so that I don’t have to copy-and-paste it over itself every few days (that list changes almost daily now).
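
A minimal sketch of the automation I have in mind, assuming the public /api/v3/federated_instances endpoint (the exact response shape varies between Lemmy versions, so treat the field handling as an assumption):

    # Pull beehaw.org's blocked-instance list so it can be mirrored locally.
    import requests

    resp = requests.get("https://beehaw.org/api/v3/federated_instances", timeout=30)
    resp.raise_for_status()
    blocked = resp.json()["federated_instances"]["blocked"]

    # Entries may be plain domain strings or objects with a "domain" key,
    # depending on the Lemmy version; normalize defensively.
    domains = sorted(d if isinstance(d, str) else d.get("domain", "") for d in blocked)
    print("\n".join(domains))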


The theory I have, and it’s something I want to test with tucson.social, is that a democratic community will ONLY work with local stakeholders. Internet randos will always ruin the democratic makeup of a community since they can be from anywhere and have conflicting allegiances. However, by ensuring that an online community is a mirror of the local community, there is a deeper respect for Democratic norms because the participants are actually a part of the community they affect. At least, that’s my theory anyways. There will certainly be other problems in this model, but I believe it may be the only real way that an online community can self govern without falling prey to internet extremism.

Heck, all the talk I see about TOR concerns me. Like, what are you wanting to do on tucson.social that requires TOR? I get it for online-only communities that are meant for a global audience, but for something hyper local and meant for people who are (arguably) not oppressed by government restrictions on free speech, I just don’t see the point.

Anyways, I shall see if this even works. Perhaps a year later I’ll be writing a post-mortem on the failure of tucson.social at the hands of extreme members of the community - or maybe I’ll make an exuberant post about its success in self-moderation/self-administration?


Don’t forget to take it easy from time to time.

I think us Admins/Instance Owners could benefit from a sort of social federation ourselves. A sort of collection of trusted people and organizations who share similar visions of what an online community should be. Early in this platform such connections could serve as sources of advice, help, solidarity as we venture into uncharted territory.


Thank you for the measured take on this.

You are correct, I don’t intend to pressure or cause harm! But I certainly see the results, and it is indeed pressure. As another commenter pointed out, there are many instance admins who work a bit closer to the team on the Matrix chatrooms and that’s their preferred method of communication. Now that I know this, I’ll let things cool down and join myself. I definitely intend to contribute where I can in the codebase, and I wouldn’t dream of escalating to public pressure for smaller concerns.

However, I have a slight, and perhaps pedantic, disagreement about making changes. In this case, the request was for not making a change. If it weren’t for the fact that the feature was already ripped out, it would be as simple as not removing it (or in this case re-working it a bit). I understand that that isn’t the current reality, and that it required work to revert - and if not for a ton of spambots, I think it would’ve been easier to adapt.

Ultimately it will take time to discuss workarounds and help others implement them, and the deadline is ultimately the arrival of the version that drops the older captcha (or was, in this case - it’s getting merged back in as we speak - might even be done now). With that reality, I had a sense that this could be an existential problem for the early Threadiverse.

I definitely didn’t intend to suggest that the devs were in any way at fault here. I read the GitHub issues enough to come away with the takeaway that a quick (relative to a new feature) reversion to the prior implementation was feasible. To me the feedback they were receiving seemed to be “Admins and devs alike are okay moving forward and opinions to the contrary are minimal, let’s move forward.” My post was definitely intended to be a way to communicate using raw numbers (but not harassment). I’d like to think I’m fairly pragmatic: if it IS working for folks, then that is a contrary opinion, and it was missing from the discussion.

Where I definitely failed was my overly emotional messaging. It’s certainly not an excuse, but my recent autism diagnosis does at least help explain why I have an extremely strong sense of justice and can sometimes react in ways that are less than productive.

As for the licensing, I agree! I’m talking to some good friends of mine because I want to take my instance WAY further than most others - the goal is a non-profit that answers to Tucsonans and residents of greater Pima County rather than someone not in the community. There are just so many features this concept would need that it might diverge so far from the Lemmy vision that it needs to be something new - and hopefully a template for hyper-local social networks that can take on Nextdoor.


Oh! I just remembered something. Isn’t there a site that recommends a Lemmy instance? Might it make sense that multiple users found your website because it changed its recommendation to distribute new users? Does that sort of pattern hold in this case?


5, huh? That’s actually notable. So far I haven’t seen a real human user take longer than a couple of hours to validate. Human registrations on my instance seem to have about 30% attrition. That is, out of 10 real human users, I can reasonably expect that 3 won’t complete the flow. It seems like your case might be nearing 40-50%, which isn’t unheard of, but couple this with how quickly these accounts were created and I think you are looking at bots.

The kicker is, though, if one of them IS a real user, it’s going to be almost impossible to find out.

This is indeed getting more sophisticated.

I wish I could see this time period on a Cloudflare security dashboard; I’m sure there would be a few more indicators there.


Guess I best get over there then. Sounds like a place to voice my concerns without resorting to public appeals.

You just said you’re only interacting with a small group of independent admins, but now you’re making a conflated statement of “many Admins”.

I can be working with a small set of independent instance admins (brought together by a newer instance, with discussions mostly through Discord) - I’ve helped them test a few things, and our little Discord meta-community is already constructing new features, auto-posting bots of different types (RSS feeds, even posts, etc), and a few other things.

However, this is different from “Most Admins”, where my interactions are largely based in the meta/support channels for other instances. This is a much more confusing population to me, since many were exposed to the whole “Lemmy is for Authoritarian Communists” narrative that was making the rounds on Reddit. It’s resulted in a newer cohort of admins that aren’t nearly as friendly to the development team.

The only reason you got what you wanted in the end was because someone else put in the work to make it happen

Nah, I would’ve made the change myself, but it wouldn’t do a darn thing because it depends on the inherent security of less technical admins. This project is as much impacted by individual decisions as it is by collective ones.

And until the maintainers changed their mind, they likely wouldn’t have allowed a resurrection of the old captcha anyways - so your point about another person “doing the work” only really holds because the maintainers communicated that it was acceptable. Because, as stated in my previous point, an individual instance making this change (reverting captcha) isn’t protected from instances that don’t.

This all points back to my original point, which revolves around new admins understanding the importance of engaging the maintainers and making themselves heard. The fact that people who already do this took offence at my post is a little bizarre, because I’m clearly not talking about the people who have been communicating.

Sure, those who’ve been with the Fediverse for a bit are familiar with Matrix and how to use it to communicate back to the core developers. But the new influx of instances and their admins either A - don’t know where to go, B - don’t care, or C - are so ideologically opposed to the rumors they want nothing to do with them.


Huh, that is interesting, yeah, that pattern is very anomalous. If you have DB access you can try to run this query to return all un-verified users and see if you can identify if the email activations are being completed:

SELECT p.id, p.name, l.email
FROM person AS p
LEFT JOIN local_user AS l ON p.id = l.person_id
WHERE p.local = true
  AND p.banned = false
  AND l.email_verified = 'f';


Not so sure on the LLM front, GPT4+Wolfram+Bing plugins seems to be a doozy of a combo. If anything there should be perhaps a couple interactable elements on the screen that need to be interacted with in a dynamic order that’s newly generated for each signup. Like perhaps “Select the bubble closest to the bottom of the page before clicking submit” on one signup and “Check the box that’s the furthest to the right before clicking submit”?

Just spitballin it there.

As for the +category on email addresses - I’m certainly not suggesting they drop support for it, buuuuutttt if we’re all about making sure 1 user = 1 email address, then perhaps we should make the duplication check a bit more robust to account for these types of emails. After all, someuser+lemmy@somedomain.com is the same as someuser@somedomain.com, but the validation doesn’t see that. Maybe it should?
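
Something as simple as normalizing addresses before the uniqueness check would catch it. A rough sketch of the idea (Lemmy itself is Rust, so this is just the logic in Python; provider-specific tricks like Gmail dot-stripping are deliberately left out):

    # Normalize an address so someuser+lemmy@somedomain.com and
    # someuser@somedomain.com collide in a duplicate-email check.
    def normalize_email(addr: str) -> str:
        local, _, domain = addr.strip().lower().partition("@")
        local = local.split("+", 1)[0]  # drop the +category suffix
        return f"{local}@{domain}"

    assert normalize_email("SomeUser+lemmy@SomeDomain.com") == "someuser@somedomain.com"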


The language of your post was quite hostile and painted (and continues to paint) the developers as being out of touch with instance admins. The instance admins are already “loud, clear and coordinated”, and are working in full communication with the maintainers.

Right now the instance admins that I’m working with are largely independent, with only a couple of outliers. The newer instances that have just joined the fediverse didn’t really echo back their concerns. So while your statement might be true (I dunno, I don’t see any coordination, and it’s not always clear which admin concerns are important), the rapid growth has brought even more stakeholders and admins to the fediverse, some far less technical than others. I’m going to need more proof of deeper coordination, because as it stands many admins say “devs are tankies” and refuse to federate with the maintainers’ instance, let alone contribute code or money.

The majority of PR’s coming into the project are coming from instance admins seeking to solve their personal pain points. Both the issue and the PR you’re referring to were created by ruud…

This is a new phenomenon; the total lines of code written by the primary devs are still much larger than any other combination of PRs. I don’t envy the position of having to sort through thousands upon thousands of PRs that may or may not align with the project’s vision or code-quality standards. Rolling back to a known prior state is almost always lower effort than minting a fresh new implementation.

Also, ruud did not create the PR I’m referring to, that honor goes to TKillFree. Heck, why do you think I’m attacking the author here rather than trying to bring more weight to his Github issue? It’s because of ruud that I even know what’s going on - and the instance admins I know were pretty clueless about the pending change.

I’ll grant you that my tone and signalling need work, but I do think that an attempt to rally more folks did indeed influence the solutions the maintainers were willing to accept - from “new, better implementation only - remove the existing flawed one now” to “okay, we can keep the flawed method, but we need an enhanced version, and soon.”

At this point it’s hard to tell, because we don’t live in a universe where I didn’t make that post to compare against. Maybe you’re right and this would’ve all shaken out eventually.


Hmmm, I’d check the following:

  1. Do the emails follow a pattern? (randouser####@commondomain.com)
  2. Did the emails actually validate, or do you just not see bouncebacks? There is a DB field for this that admins can query (I’ll dig it up after I make this high-level post)
  3. Did the surge come from the same IP? Multiple? Did it use something that doesn’t look like a browser?
  4. Did the surge traffic hit /signup or did it hit /api/v3/register exclusively?

With those answers I should be able to tell if it’s the same or similar attacker getting more sophisticated.
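
For questions 3 and 4, if you’re fronting Lemmy with nginx, something like this against the access log pulls those answers out quickly (a sketch that assumes the default “combined” log format and log path - adjust for your setup):

    # Count hits to the registration endpoints per (IP, path, user agent).
    import re
    from collections import Counter

    # nginx "combined" format: ip - user [time] "METHOD path proto" status size "referer" "agent"
    LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+)[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

    hits = Counter()
    with open("/var/log/nginx/access.log") as log:  # path is an assumption
        for raw in log:
            m = LINE.match(raw)
            if not m:
                continue
            ip, method, path, agent = m.groups()
            if path.startswith(("/signup", "/api/v3/register")):
                hits[(ip, path, agent)] += 1

    for (ip, path, agent), n in hits.most_common(20):
        print(f"{n:5d}  {ip:15s}  {path:25s}  {agent}")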

Some patterns I noticed in the attacks I’ve received:

  1. it’s exactly 9 attempts every 30 minutes from the user agent “python/requests”
  2. The users that did not get an email bounceback were still not authenticated hours later (maybe the attacker lucked out with a real email that didn’t bounce back?). There was no effort to verify from what I could determine.

Some vulnerabilities I know that can be exploited and would expect to see next:

  1. ChatGPT sounds human enough for the registration forms. I’ve got no idea why folks think this is the end-all solution when it can be faked just as easily.
  2. Duplicate Email conflicts can be bypassed by using a “+category” in your email. ie (someuser+lemmy@somedomain.com) This would allow someone to associate potentially hundreds of spam accounts with a single email.

I’m confused - that’s almost exactly what I said, albeit in a very condensed form.

Once you take a discretionary bonus and make it into an incentive (i.e., “this year the Christmas bonus must be earned by doing X, Y, Z”), adding stipulations tied to worker output turns it into a non-discretionary bonus.

Promissory Estoppel is the basis for why non-discretionary bonuses are a category. There is a perceived promise of a bonus that people work for but are then denied, which can cause knock-on effects for the people to whom that bonus is owed. A bonus is discretionary up until the point it’s used to get people to work longer or perform better.

Sure the general term is Promissory Estoppel, but that’s a much weaker regulatory framework than Pay and Labor laws around non-discretionary bonuses.

If there is something else I’m not understanding here please enlighten me further. If it’s not “accurate” I invite you to help me be more accurate.


Eh, this situation seems more like the “admins”/power users of the software saying “How can you not need us?” - and for them, that’s more of a point. These are the people who submit bug reports, code features or plugins on a weekend, and generally turn your one product into a rich ecosystem of interconnected experiences. One can argue that the project doesn’t technically require their participation, but they do enhance the project in many different ways.

open-source entitlement is a thing, but I’m not sure that this is the same thing. I for one would be happy to submit changes (and even have a couple brewing for my own use on my instance). Just don’t make the spam problem worse in the meantime by pushing out a version that’s missing a crucial (if imperfect) feature.


You won’t see me making call to action posts for undelivered features or other small-fry items. I’m a dev, I get it.

But there are always times where vulnerabilities come up and a dev might not otherwise know that they’re being exploited. It’s one thing to have a feature to fix that vulnerability and get to it as part of your own priority list. It’s another when that vulnerability is actively impacting the people using the software - that’s when getting vocal about an issue is appropriate to help me alter my priorities, IMO.


Looks like someone already opened a PR to roll back to a retrofitted solution (I had to wait until the weekend before I could find the time to work on this).

The devs are willing to accept a retro-fitted captcha (rather than just mCaptcha) in time for v0.18 and they communicated as such about 9 hours ago (for me). So for me, my push for visibility is complete unless they block the incoming PR for whatever reason. The devs have been made aware that this is contentious and the community could be impacted negatively and they see the need for it.

For me, that indicates that the Lemmy devs will listen to key, important issues, that impact the health of the larger fediverse as long as the community is clear about what the largest issues actually are.

A lot of folks here characterized me as someone wanting to “brigade”, but that’s not quite true. I just know that sometimes developers don’t know what’s going on with admins unless the admins are loud, clear, and coordinated. That doesn’t mean that I was asking folks to “force” the devs to do anything or be abusive, just that enough feedback might convince them to see things from a different perspective than a perfect technical solution.


Sure, I agree that the current implementation isn’t the most robust in stopping all conceivable bots. Heck, it’s quite poor as some others have pointed out.

The reality is, though, that it is currently making a difference for many server admins, now, today.

Let’s use a convoluted metaphor!

It’s as if each Lemmy instance has a poorly constructed umbrella (the old captcha). Now a storm has arrived (bot signups), and while the umbrella is indeed leaky, the umbrella operator is not as wet as they would be without it. Now imagine that these magical, auto-upgrading umbrellas receive an update during this storm that removes the fabric entirely while the makers work on a less leaky solution. It would be madness, right? It’s not about improving the product - that’s desired and good! It’s about making sure the old way of doing things is there until the newer solution is delivered.

As a user of this “magical umbrella,” I’d be scrambling because a feature that was working (albeit poorly and imperfectly) suddenly doesn’t exist at all anymore. Good thing I have a MUCH bigger umbrella that I pay $$$ for (Cloudflare) to set up in the meantime. However, this huge umbrella is too big, and if I don’t cut some holes in it, it’ll be too “dark” to function. So not even this solution is perfect.


Fun fact, I purposefully goaded the bots into attacking my instance.

Turns out they aren’t even using the web form; they’re going straight to the register API endpoint with Python. The API endpoint lives at a different place from the signup page, and putting a captcha in front of that page was useless in stopping the bots. And we can’t just challenge requests going to the API endpoint, since it’s not an interactive session - it would break registration for normal users as well.

The in-built captcha was part of the API form in a way that prevented this attack where the standard Cloudflare rules are either too weak (providing no protection) or too strong (breaking functionality).

In my case I had to create some special rules to exclude python clients and other bots while making sure to keep valid browser attempts working. It was kind of a pain, actually. There’s a lot of Lemmy that seems to trip the optional OWASP managed rules so there’s a lot of “artisanally crafted” exclusions to keep the site functional.

Anyways, I guess my point is form interaction is just one way to spam sites, but this particular attacker is using the backend API and forgoing the sign-up page entirely. Hidden fields wouldn’t be useful here, IMO.


It looks like they decided to bring it back in time for the next release! - https://github.com/LemmyNet/lemmy/issues/3200#issuecomment-1600505757

They specifically mentioned the feedback in the ticket and it goes to show how collective action can work.

Despite how others felt that I was trying to start a “brigade,” I was only trying to raise awareness by being collectively vocal. I never asked folks to abuse devs or “force” them to do something. I asked them to make their concerns known and let the devs choose. It’s just that when I posted there were far fewer comments, and if I were the developer I wouldn’t know that this issue is important to a lot of people - at least not just looking at the GitHub issues anyways.


I dunno, Mr. Google, but I’m fairly sure Azure won’t decide to sell off their domain registrar out from underneath their customers.

I’m fairly certain that Azure won’t drastically update the “packages” to buy every 6 months like GCP/Gcloud did.

I’m fairly certain, given the track record of Google products and services, that this has nothing to do with Azure being “anti-competitive” and everything to do with Google being known for axing its own products. If I build something on GCP, I can’t trust that it will continue to run unattended. I know I’ll need to always keep my eyes on the news feed should Google axe another product I was using.


I’ll be the odd one out and say I support this model but for other reasons than the technical limitations and scaling problems involved. For me it’s more about trying to establish a tighter ring of trust and enable easier user onboarding as the hub could serve as the primary identity store for users on multiple instances.

I mentioned it in some chat earlier, but I think that the Beehaw.org moderation model, goals, and philosophy serve as an excellent starting point for like-minded communities to build out the hub-and-spoke. It would also give them greater flexibility in maintaining the health of their corner of the fediverse by centralizing identity with them.

This model would, of course, not stop others from creating their own hub and spoke, and would break apart the fediverse a bit, so I suppose there should be a way for “hubs” to talk to each other in a way that resembles what we have now.

From a blocking bad actors standpoint (I’m still upset about Captcha getting removed even if it’s a technically inferior solution), it would be far easier to have fewer hubs to need to blacklist/whitelist than having to do it for each individual instance.

I guess to go a bit further, if Lemmy could support both “modes” (as in it can be configured to be hub and spoke as either the hub or spoke, as well as retain the existing functionality for those who don’t want a hub) that would be ideal.


In U.S. law there are, generally speaking, two types of bonuses.

Non-Discretionary - a.k.a. any bonus that doesn’t take into account discretion on the part of management and higher. This is usually for bonuses that apply as an “incentive” and have requirements to achieve. Think sales targets for sales teams, on-call incentive structures, and more. This type of bonus is actually considered part of your wage.

Discretionary - a.k.a. any bonus that is paid at the discretion of company ownership. Notably, these are bonuses that are not typically communicated in advance, and thus an employee wouldn’t know to expect them. They might still expect them out of “tradition,” but if the only time you ever know about a holiday bonus is when it arrives, it’s likely discretionary. These bonuses aren’t guaranteed by anyone - and an employer can indeed choose not to pay these types of bonuses.

It seems that Twitter failed to pay a non-discretionary bonus and there’s a large paper trail of incentives given to employees for this bonus. I really hope the DOL makes an example of them on this case.


Admins, we’re about to have a really bad SPAM problem when Lemmy removes captcha support in v0.18 - You ALL have a responsibility to communicate back to the Lemmy devs to try to stop it.
Look, we can debate the proper and private way to do captchas all day, but if we remove the existing implementation we will be plunged into a world of hurt.

I run tucson.social - a tiny instance with barely any users - and I find myself really ticked off at other admins' abdication of duty when it comes to engaging with the developers.

For all the Fediverse discussion on this, where are the GitHub issue comments? Where is our attempt to convince the devs on this? No, seriously, WHERE ARE THEY?

Oh, you think that just because an "issue" exists to bring back captchas that's the best you can do? NO, it is not the best we can do. We need to be applying some pressure to the developers here, and that requires EVERYONE to do their part. The devs can't make Lemmy an awesome place for us if we admins refuse to meaningfully engage with the project and provide feedback on crucial things like this.

So are you an admin? If so, we need more comments here: https://github.com/LemmyNet/lemmy/issues/3200

We need to make it VERY clear that captcha is required before v0.18's release. Not after, when we'll all be scrambling...

EDIT: To be clear, I'm talking to all instance admins, not just Beehaw's.

UPDATE: Our voices were heard! https://github.com/LemmyNet/lemmy/issues/3200#issuecomment-1600505757

The important part was that this was a decision to re-implement the old (if imperfect) solution in time for the upcoming release. mCaptcha and better techs are indeed the better solution, but at least we won't make ourselves more vulnerable at this critical juncture.

Damn it! Now I have to move all my domains.

I think federation of city/state focused instances would be ideal. Perhaps with Beehaw at the center. A hub and spoke model, if you will.
The problem with federation is that there is at least some complexity to subscribing to other communities - at least right now. I love it here, but I have to be honest, it is a bit difficult to navigate for most.

But perhaps we're missing the point? What if we should be trying to sell the local aspect harder? There is a demand for hyper-local networks, as evidenced by Nextdoor. Couple this with an increase in people wanting a better digital commons - one not controlled by a single corporation.

For my instance, tucson.social, I'm going to get some signs printed and do some local advertising. I'll probably sneak some signs around the U of A campus, but mostly put them in public spaces where it's legal and proper. I think that the local nature of all of this makes advertising a bit more effective and locally relevant. I don't want to sell it as a "reddit" but as another "place" that will exist whether or not reddit does (due to the non-profit or whatever other org we construct) to talk about our city.

At the same time, I want to reach out to important community members to see if they might be interested in donating once I have a formal non-profit. I'd make sure to emphasize the utility such a site might have to local businesses once advertising is possible.

I have no idea if this will all work, but it's something I'm trying to do anyways. I just believe communities should be local, and that the online representation of them should be as close to a mirror of the local one as possible. I also want to foster conversations that help people grow and connect.

So maybe one of the admins here sees this, because I'd really like to join forces in a more meaningful way and hopefully gain the ability to deliver a meaningful experience to all sorts of communities. My immediate skills are technical, and perhaps if I follow along and learn your tips along the way, I can have the best possible chance of making this happen - and constructing the blueprints for others to follow.

I'm also open to other wisdom from the Beehaw community! Have you done local marketing and advertising? I could use some tips. Have you formed a non-profit before? I'd definitely like to hear from you! Are you in Tucson and want to get more deeply involved? - DM me!

The only thing I'd prefer not to hear is how difficult it is. I'm fully aware that I've chosen to go all in on terrible odds. I don't really care anymore. lol