Salamander
  • 4 Posts
  • 39 Comments
Joined 2Y ago
Cake day: Dec 19, 2021


Thank you for your hard work!!

I appreciate that you are going through this test period. I hope it all goes smoothly and that at least a few hairs remain on your heads by the end of this week. Good luck!


Oh, I had not noticed that page. I was hoping that getting the NiDAQmx driver (https://www.ni.com/en/support/downloads/drivers/download.ni-daq-mx.html#494543) installed would be enough. So this means that even if I succeed it might still not support it 😅

I was quite naive when selecting this card. I knew nothing about PCIe and I figured it would be a very simple matter to read out the values…

I just found some videos about writing PCIe drivers from scratch, but since I know nothing about PCIe I have no idea how complex reverse-engineering the card would be. I suspect that it might be a very difficult thing to do, maybe even practically impossible.


Has anyone here managed to interface a National Instruments DAQ card with Arch?
I purchased a PCIe DAQ card from National Instruments (PCIe-6536B), and I have struggled trying to get their proprietary drivers installed so that I can interface with the card using the NI-DAQmx library in Python. I am considering giving up on it. Have any of you worked (or tried to work) with these cards in Arch? If you can share how you managed it, I would appreciate it. But, really, even knowing that someone has succeeded would be enough to motivate me to continue trying. And knowing that others have also struggled and failed would help me confirm that National Instruments is not the way to go.

what was /post/1? what are they hiding??

The plot thickens…



Where are you at? What is your problem specifically?



Yeah, I still see the line now. I am not sure if this was a one-off - maybe the edit came in while I had rebooted the instance for a moment and it fell through the cracks… Or there might be an actual issue with federating edits.


I think you left this line behind by accident:

l = Lemmy(INSTANCE_URL)


My view is: I don’t like this cultural element, and I am glad that I live in a country without it. But if I were a visitor from abroad I would not resist the local culture or try to impose my own values. If I am aware of this cultural element and I dislike it, my options would be either to avoid restaurants and other tipping situations as much as I can, or simply to account for the tip when making my financial decisions and pay it.

If I live in the country then it is different, because then I am more entitled to be a driver of change. Personally, my approach would be to support businesses with an explicit no-tipping policy, and to refuse tips myself.


I have just tested by uploading/re-downloading an image, and the EXIF data is removed.

I then looked through the Lemmy issues and found this issue related to the image-uploading back-end (pict-rs) removing the EXIF data. In response to this issue, the developer of pict-rs (asonix) comments that stripping the EXIF data was one of the original motivations for building the uploader.

I am not sure how to search through the source code of pict-rs, and this step does not seem to be properly documented in the readme file, so I have not been able to find exactly where the metadata removal takes place. I think that this is done by invoking ‘exiftool’.
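For anyone who wants to double-check their own uploads, something like this should do it (this assumes you have exiftool installed, and the pictrs URL is only a placeholder):

```
# Re-download one of your own uploads and list any metadata that survived.
# The URL below is a placeholder - substitute a real image from your instance.
curl -sO https://mander.xyz/pictrs/image/your-image-id.jpg
exiftool your-image-id.jpg | grep -iE 'gps|make|model|date/time original'
# No output from the grep means the sensitive tags are gone; exiftool without
# the grep will still show basic file properties such as dimensions and type.
```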


I would kill. A 2X growth rate is too fast, and it is easily better to lose 100 random people now than 200 immediately after.

What about these rules?

  • The group of people on the tracks is randomized every time.

  • The group always includes the person that the current decision maker loves the most.

  • The choice is to kill, or to increase the number of people in the kill group by one.

  • If the number of humans available reaches the population number, everyone dies.

  • The list of every decision made by every decision maker is public knowledge.

  • You are the first decision maker.


I can see them from the computer browser and my phone’s browser (image below), not from Jerboa. At the moment I can only see emojis from my instance, but maybe as other instances add their own images they will become visible too.


In the admin settings there is a tab that allows admins to add an icon. So far I added :lemmy_hearts: and :mander: to test. I am posting from Jerboa now and the custom icons did not get invoked. I don’t know yet if you can also invoke them from a different instance.

What I did notice is that the icon gets fixed to a size that is bigger than a normal icon, and the upload button doesn’t seem to be working - I had to feed the form the direct URL to an image.


What do you mean? What is being used?


Wuhuu! Thank you and congratulations!!


EDIT: Sorry, I misunderstood this question ~~ I have a Raspberry Pi connected to a 1 TB SSD. This has the following cron job:

00 8 * * * /usr/bin/bash /home/user/backup/backup.sh

And the command in backup.sh is:

rsync --bwlimit=3200 -avHe ssh user@instance-ip:/var/www/mander/volumes /home/user/backup/$(date | awk '{print $3"-" $2 "-"$6}')

In my case, my home network has a download speed of 1 Gbps, and the server has an upload speed of 50 Mbps, so I use --bwlimit=3200 to limit the download to 25.6 Mbps and prevent overloading my server’s bandwidth.

So every morning at 8 am the command is run and a full backup copy is created.

It seems that you have a different problem from mine. In your case, rather than doing a full copy like me, you can do incremental backups. The incremental backup is done by using rsync to synchronize the same folder - so, instead of the variable folder name $(date | awk '{print $3"-" $2 "-"$6}'), you can simply call it instance_backup. You can copy the folder locally after synchronizing if you would like to keep a record of backups over a period of a few days.

On second thought, I would also benefit from doing incremental backups and making the copies locally after synchronizing… ~~
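A rough sketch of what that incremental version could look like, reusing the same paths and bandwidth limit as above (the hard-linked snapshot step is optional):

```
#!/usr/bin/bash
# Incremental variant of backup.sh: always sync into the same folder.
# Add --delete if you also want deletions on the server to propagate.
rsync --bwlimit=3200 -avH -e ssh \
    user@instance-ip:/var/www/mander/volumes /home/user/backup/instance_backup

# Optional: keep a dated snapshot. cp -al hard-links unchanged files,
# so each snapshot only costs the space of the files that changed.
cp -al /home/user/backup/instance_backup "/home/user/backup/$(date +%F)"
```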


  1. Not super easily. It can be done by querying the PostgreSQL database, but there is no built-in method to do it through the browser interface at the moment. When anyone from any instance does report them, you will see the report.

  2. Someone please correct me if I am wrong. But, as far as I am aware, if you purge a user from your instance, that action is federated to every other instance - so if you respond quickly to these reports, other instances’ admins will not need to deal with them themselves. It is only when you act on a user from a different instance that the action stays local.


It may be an AI, or it can also be a real human that is lying. The point of the application filter is to significantly slow down these approaches to bring their impact to a more manageable level. An automated AI bot will not be able to perform much better than a human troll with some free time because any anomalous registration patterns, including registration spikes and periodicity, are likely to be detected by the much more powerful processor that resides in the admin’s head.

On the other hand, a catch-all domain e-mail, a VPN with a variable IP, and a captcha-defeating bot can be used to generate thousands of accounts in a very short amount of time. Without the application filter the instance is vulnerable to these high-throughput attacks, and the damage can be difficult to fix.


It is too easy to fake e-mails. You can set up a catch-all e-mail domain and spam the registration like that. I am not a fan of giving out my e-mail, nor of collecting other people’s e-mails.

My current message contains the following:

> Please leave a short message (a sentence or two is enough) stating why you would like to join this instance and I will accept your application as soon as possible. The purpose of this form is to filter out spam bots, not to judge your motivation for joining.

It is not about them writing an essay to be let in. It is a very effective strategy to weed out spam accounts being registered en masse. One step is to make sure that the user wrote a cohesive sentence that addresses the question, and the other step is to check whether there is a sudden spike of similar new applications. Even ignoring the actual text, it is useful to be able to monitor whether you are getting rate-limited bursts of account creations, and having the ability to approve/deny lets you respond with less effort than if they succeeded at creating the accounts.
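As a rough example of what I mean by monitoring bursts: counting registration attempts per hour in the reverse-proxy log already tells you a lot. The log path and the endpoint below are assumptions for a typical nginx setup, so adjust them to your instance:

```
# Count registration attempts per hour from the nginx access log.
# Both the log path and the endpoint are assumptions - adjust for your setup.
# The awk call pulls the day and hour out of the timestamp, e.g. "19/Jun/2023:14".
grep 'POST /api/v3/user/register' /var/log/nginx/access.log \
    | awk '{ print substr($4, 2, 14) }' \
    | sort | uniq -c | sort -rn | head
```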


I think that you can add captions to images, like this:

![image caption](image url)

I wonder whether these tools would identify and read out the caption.


Salamander (creator) • to Lemmy Support@lemmy.ml • CPU load spikes

The spikes disappeared after I increased the RAM from 2 GB to 3 GB, and they have not re-appeared over the past few hours.

It appears like some process was hitting the 2 GB RAM limit - even though under normal use only about 800 MB of RAM are allocated. At first I thought that the high amount of read IOPS might be due to the swap memory kicking into action, but the server has no allocated swap.

The postgresql container appears to fail when all of the RAM is used up, and it may be that the high CPU usage is somehow related to repopulating the database as it restarts… But I would think that if this were the case I would see similar spikes whenever I reboot - and I don’t.

Conclusion: I am not sure why this happens.

But if anyone else notices these spikes it may be a RAM issue. Try increasing the RAM and see if they go away.
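If you want to confirm that memory pressure is the problem before resizing, the standard tools are enough (nothing Lemmy-specific here):

```
free -h                    # total, used and available RAM, plus any swap
swapon --show              # empty output means no swap is configured
docker stats --no-stream   # per-container memory use, to see which one hits the limit
```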


You don’t need to use a known redirect link. If the plan begins with a post that obtains 10,000 likes, I am sure the attacker can spend a small amount of effort and register a domain.


It makes it a little bit easier to do, but it is not difficult to replicate this effect without changing the URL in the title - using a redirected URL and changing the redirect address, for example.

I think that this small increase in how easily this kind of attack can be delivered is more than counter-balanced by the convenience of having editable titles.


  • The comment thread began with a user on lemmy.world, and that instance is defederated from Beehaw, so that comment and its sub-comments were not fetched.
  • The ‘comment ID’ is not shared by different instances, so each instance will assign every comment its own ID.
  • If you want to fetch the comment from a third instance, you would need to click the colorful ‘fedisymbol’ to get the original link, which is the one from the instance that the original user lives in.

CPU load spikes
Hello, In the last 24 hours my instance has been experiencing spikes in CPU usage. During these spikes, the instance is unreachable. ![](https://mander.xyz/pictrs/image/f58d6ba7-6a25-4871-b558-6352a8bb4e4c.png) The spikes are not correlated with an increase in bandwidth use. They are strongly correlated with IOPS spikes, specifically with read operations. ![](https://mander.xyz/pictrs/image/91bb0d2f-1d1d-4dfb-87d7-45dc9ee25cda.png) ![](https://mander.xyz/pictrs/image/e6dff6c1-1b90-4ac5-a028-dc01441fa607.png) Analysis of `htop` during these spikes shows /app/lemmy and the postgres database UPDATE operation as the potential culprits, with the postgres UPDATE being my main suspicion. Looking through the postgres logs at the time of these spikes, I think that this block may be associated with these spikes: ::: spoiler spoiler ``` 2023-06-19 14:28:51.137 UTC [1908] STATEMENT: SELECT "comment"."id", "comment"."creator_id", "comment"."post_id", "comment"."content", "comment"."removed", "comment"."published", "comment"."updated", "comment"."deleted", "comment"."ap_id", "comment"."local", "comment"."path", "comment"."distinguished", "comment"."language_id", "person"."id", "person"."name", "person"."display_name", "person"."avatar", "person"."banned", "person"."published", "person"."updated", "person"."actor_id", "person"."bio", "person"."local", "person"."banner", "person"."deleted", "person"."inbox_url", "person"."shared_inbox_url", "person"."matrix_user_id", "person"."admin", "person"."bot_account", "person"."ban_expires", "person"."instance_id", "post"."id", "post"."name", "post"."url", "post"."body", "post"."creator_id", "post"."community_id", "post"."removed", "post"."locked", "post"."published", "post"."updated", "post"."deleted", "post"."nsfw", "post"."embed_title", "post"."embed_description", "post"."embed_video_url", "post"."thumbnail_url", "post"."ap_id", "post"."local", "post"."language_id", "post"."featured_community", "post"."featured_local", "community"."id", "community"."name", "community"."title", "community"."description", "community"."removed", "community"."published", "community"."updated", "community"."deleted", "community"."nsfw", "community"."actor_id", "community"."local", "community"."icon", "community"."banner", "community"."hidden", "community"."posting_restricted_to_mods", "community"."instance_id", "comment_aggregates"."id", "comment_aggregates"."comment_id", "comment_aggregates"."score", "comment_aggregates"."upvotes", "comment_aggregates"."downvotes", "comment_aggregates"."published", "comment_aggregates"."child_count", "comment_aggregates"."hot_rank", "community_person_ban"."id", "community_person_ban"."community_id", "community_person_ban"."person_id", "community_person_ban"."published", "community_person_ban"."expires", "community_follower"."id", "community_follower"."community_id", "community_follower"."person_id", "community_follower"."published", "community_follower"."pending", "comment_saved"."id", "comment_saved"."comment_id", "comment_saved"."person_id", "comment_saved"."published", "person_block"."id", "person_block"."person_id", "person_block"."target_id", "person_block"."published", "comment_like"."score" FROM ((((((((((("comment" INNER JOIN "person" ON ("comment"."creator_id" = "person"."id")) INNER JOIN "post" ON ("comment"."post_id" = "post"."id")) INNER JOIN "community" ON ("post"."community_id" = "community"."id")) INNER JOIN "comment_aggregates" ON ("comment_aggregates"."comment_id" = "comment"."id")) LEFT OUTER JOIN "community_person_ban" ON 
((("community"."id" = "community_person_ban"."community_id") AND ("community_person_ban"."person_id" = "comment"."creator_id")) AND (("community_person_ban"."expires" IS NULL) OR ("community_person_ban"."expires" > CURRENT_TIMESTAMP)))) LEFT OUTER JOIN "community_follower" ON (("post"."community_id" = "community_follower"."community_id") AND ("community_follower"."person_id" = $1))) LEFT OUTER JOIN "comment_saved" ON (("comment"."id" = "comment_saved"."comment_id") AND ("comment_saved"."person_id" = $2))) LEFT OUTER JOIN "person_block" ON (("comment"."creator_id" = "person_block"."target_id") AND ("person_block"."person_id" = $3))) LEFT OUTER JOIN "community_block" ON (("community"."id" = "community_block"."community_id") AND ("community_block"."person_id" = $4))) LEFT OUTER JOIN "comment_like" ON (("comment"."id" = "comment_like"."comment_id") AND ("comment_like"."person_id" = $5))) LEFT OUTER JOIN "local_user_language" ON (("comment"."language_id" = "local_user_language"."language_id") AND ("local_user_language"."local_user_id" = $6))) WHERE (((((("community"."hidden" = $7) OR ("community_follower"."person_id" = $8)) AND ("local_user_language"."language_id" IS NOT NULL)) AND ("community_block"."person_id" IS NULL)) AND ("person_block"."person_id" IS NULL)) AND (nlevel("comment"."path") <= $9)) ORDER BY subpath("comment"."path", $10, $11), "comment_aggregates"."hot_rank" DESC LIMIT $12 OFFSET $13 2023-06-19 14:28:51.157 UTC [1] LOG: background worker "parallel worker" (PID 1907) exited with exit code 1 2023-06-19 14:28:51.246 UTC [1] LOG: background worker "parallel worker" (PID 1908) exited with exit code 1 2023-06-19 14:28:55.228 UTC [48] ERROR: could not resize shared memory segment "/PostgreSQL.3267719818" to 8388608 bytes: No space left on device ``` ::: Has anyone else faced this issue? One idea is that the database has grown to the point that my VPS does not have enough CPU resources to handle a common routine operation... But that does not explain to me the sudden appearance of the spikes - I would have expected a gradual increase in the size of the spikes over time.

Woah, that’s new to me. And sounds very illegal. Are these reports credible?

EDIT: I found a discussion on the topic here, which presents some alternatives to explain why the comments appeared to be restored: https://kbin.social/m/RedditMigration/t/34112/Heads-up-Reddit-is-quietly-restoring-deleted-AND-overwritten-posts-and

I will give Reddit the benefit of the doubt because even though they are acting pretty badly, restoring user-deleted comments sounds to me like an even higher level of incompetence.


Exactly. I really enjoyed posting on reddit, but the idea that they see our comments as a trove of data to monetize at the expense of the community that created it really makes me never want to contribute again. Too bad I made the mistake of not deleting all of my comments or replacing them with junk when deleting my account :/


“Reddit represents one of the largest data sets of just human beings talking about interesting things,” Huffman said. “We are not in the business of giving that away for free.”

🤮


Ohh, I did not know that! I should add a Matrix handle then


Yeah, that is true. But:

  • A large site often collects a lot more sensitive data about its users, such as phone numbers, IPs, devices, activities, and browser fingerprints, and it may even correlate accounts

  • Because of the value and quantity of the data, large sites will be attacked more often

However, a data breach large enough to be publicly exposed is not the only concern. I think that large sites pass their unencrypted communications through filters to detect ‘illegal activities’, and in some countries ‘illegal’ can mean simply criticizing a powerful individual. Companies also use unencrypted communications to mine information that may be valuable to advertisers.

I would not be surprised to learn that an intelligence agency has the ability to search through the plaintext of all of the DMs from a big site. Sites may give this ability to intelligence agencies in oppressive governments to track the activities of politically inconvenient journalists, for example. Laws can also change, and at some point it could be made legal to search through messages to detect even minor crimes. This may be unlikely, but it is possible.

The pressures and stakes are different, but I wouldn’t trust either a big company or two guys. If it is important for you that your DMs remain private, then you should generate your own keys, encrypt messages yourself, and keep the keys safe.
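For example, with GnuPG (the e-mail addresses and file names here are just placeholders):

```
# Generate your own key pair and export the public part to share with your contact.
gpg --quick-generate-key "you@example.org"
gpg --export --armor "you@example.org" > my_public_key.asc

# Import your contact's public key, encrypt the message yourself, and paste the
# armored output (message.txt.asc) into the DM. Only their private key can read it.
gpg --import their_public_key.asc
gpg --encrypt --armor --recipient "them@example.org" message.txt
```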


Yeah, at least Lemmy tries to warn users not to use the DMs to send sensitive or private information, and suggests using a dedicated encrypted messenger instead:

The developers are very busy building Lemmy-specific features, and encrypted DMs would be a lot of work for something that is a bit redundant. But I am sure that if someone wanted to build that, the help would be appreciated.

> Not that I particularly trusted reddit, but at least it was 1 corporation with (hopefully) some solid security procedures in place, and potential penalties for data breaches. Whereas in Lemmy, it might just be 2 random guys.

Personally I wouldn’t trust 2,000 random guys any more than 2 random guys. I assume any of my unencrypted communications are public.


Is this for a single community? Maybe you can host a simple password-protected site where the moderators can modify the message.

@fossilesque@mander.xyz is developing a plant id bot for !plantid@mander.xyz, perhaps they can explain to you how they are doing that.


You can create a one-person instance and hold your identity there.

If what you want is for every server to hold your identity, you have to trust all servers. I think that an evil admin would be able to impersonate any user from any instance if that were the case. How do you delete your account? Can any admin delete your account everywhere? Which one is the real “you”?


Well, good thing that you prepared well in advance and have already built a nice alternative.

Reddit is done


Thank you! I will look into Cloudflare, what people say about it, and what resources are necessary to avoid DDoS attacks without it!


> Better delivery and avoids exposing your IP via emails, although it’s best to set up some sort of tunnel to avoid having that problem altogether.

Is it possible to have a public-facing instance without exposing your IP? I am not sure I understand that part, and I am very interested in understanding how to achieve that.


> consider using an email delivery service like jetmail instead of sending mail directly from the instance

Why is this better? To overcome spam filters, or is there some security risk associated with e-mails?


It looks like that specific community has not been fetched yet.

Try searching for: !hardcore@lemmy.ml

This should fetch the community.


Adding to this - when you want to fetch content into your instance, you should search for the url that you get when you click the right button.


[Question] How are reports federated?
If a user from my instance reports a post that lives in a different instance, I am able to see that report. But it is not clear to me whether admins and mods from the original instance get that same report. I am curious, because in some cases the reported content might not be bad enough for me to take action on my side, but the other instance's moderators might be interested in looking at it themselves. I also don't know whether 'resolving' the report on my side will resolve it on their side as well, or if we can act on the report independently.

I am curious about why lemmy.ml is blocked in your country. Is the ‘ml’ domain generally blocked? Or was lemmy.ml specifically added to some block list?