Oh, I had not noticed that page. I was hoping that installing the NiDAQmx (https://www.ni.com/en/support/downloads/drivers/download.ni-daq-mx.html#494543) driver would be enough. So this means that even if I get it installed, it might still not support this card 😅
I was quite naive when selecting this card. I knew nothing about PCIe and I figured it would be a very simple matter to read out the values…
I just found some videos about writing PCIe drivers from scratch, but since I know nothing about PCIe I have no idea how complex it would be to reverse-engineer the card. I suspect it might be very difficult, maybe even practically impossible.
My view is: I don't like this cultural element, and I am glad that I live in a country without it. But as a visitor from abroad I would not resist the local culture or try to impose my own values. If I am aware of this cultural element and dislike it, my options are either to avoid restaurants and other tipping situations as much as I can, or to simply account for the tip in my financial decisions and pay it.
If I live in the country then it is different, because then I am more entitled to be a driver of change. Personally, my approach would be to support businesses with an explicit no-tipping policy, and to refuse tips myself.
I have just tested this by uploading and re-downloading an image, and the EXIF data is indeed removed.
I then looked through the Lemmy issues and found this issue related to the image-uploading back-end (pict-rs) removing the EXIF data. In response to this issue, the developer of pict-rs (asonix) comments that stripping the EXIF data was one of the original motivations for building the uploader.
I am not sure how to search through the source code of pict-rs, and this step does not seem to be documented in the readme, so I have not been able to find exactly where the metadata removal takes place. I think it is done by invoking exiftool.
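I have not actually confirmed this in the pict-rs code, but if it does call out to exiftool, the stripping step probably looks something like this (just a sketch; the file name is a placeholder):

```
# Remove all writable metadata tags (EXIF, GPS, XMP, ...) from the image in place.
# Without -overwrite_original, exiftool keeps a backup copy named upload.jpg_original.
exiftool -all= -overwrite_original upload.jpg

# List whatever tags remain, e.g. to verify that a re-downloaded image really has no EXIF data.
exiftool upload.jpg
```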
I would kill. A 2x growth rate is too fast, and it is clearly better to kill 100 random people now than 200 immediately after.
What about these rules?
The group of people on the tracks is randomized every time.
The group always includes the person that the current decision maker loves the most.
The choice is to kill, or to increase the number of people in the kill group by one.
If the group grows to include the entire population, everyone dies.
The list of every decision made by every decision maker is public knowledge.
You are the first decision maker.
In the admin settings there is a tab that allows admins to add an icon. So far I added :lemmy_hearts: and :mander: to test. I am posting from Jerboa now and the custom icons did not get invoked. I don’t know yet if you can also invoke them from a different instance.
What I did notice is that the icon gets fixed to a size that is bigger than a normal icon, and the upload button doesn't seem to be working - I had to feed the form the direct URL to an image.
EDIT: Sorry, I misunderstood this question ~~ I have a Raspberry Pi connected to a 1 TB SSD. This has the following cron job:
00 8 * * * /usr/bin/bash /home/user/backup/backup.sh
And the command in backup.sh is:
rsync --bwlimit=3200 -avHe ssh user@instance-ip:/var/www/mander/volumes /home/user/backup/$(date | awk '{print $3"-" $2 "-"$6}')
In my case, my home network has a download speed of 1 Gbps, and the server has an upload speed of 50 Mbps, so I use --bwlimit=3200 (in KB/s) to limit the download to about 25.6 Mbps and prevent overloading my server's bandwidth.
So every morning at 8 am the command is run and a full backup copy is created.
It seems that you have a different problem than mine. In your case, rather than making a full copy like I do, you can do incremental backups. The incremental backup is done by using rsync to synchronize into the same folder every time - so, instead of the variable folder name $(date | awk '{print $3"-" $2 "-"$6}'), you can simply call that folder instance_backup (see the sketch below). You can copy the folder locally after synchronizing if you would like to keep a record of backups over a period of a few days.
On second thought, I would also benefit from doing incremental backups and making the copies locally after synchronizing… ~~
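Roughly, that incremental approach could look like this (an untested sketch; the paths, host, and bandwidth limit are just the ones from my own example above, so adapt them to your setup):

```
# Keep a single continuously-updated mirror of the server volumes.
# --delete makes the mirror match the source exactly (files removed on the server are removed locally too).
rsync --bwlimit=3200 -avH --delete -e ssh user@instance-ip:/var/www/mander/volumes /home/user/backup/instance_backup

# Then take a dated local snapshot of the mirror. cp -al uses hard links,
# so unchanged files do not take up extra disk space.
cp -al /home/user/backup/instance_backup /home/user/backup/snapshot-$(date +%F)
```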
Not super easily. It can be done by querying the PostgreSQL database, but at the moment there is no built-in way to do it through the browser interface. When anyone from any instance reports them, you will see the report.
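If you have shell access to the server, something along these lines should work. I am going from memory here, so the container name, database user, and the table/column names (post_report, comment_report, resolved) are assumptions - check them against your own database first:

```
# Count unresolved reports by querying PostgreSQL directly from the host.
# Container name, user, and table/column names are guesses - verify them before relying on this.
docker exec lemmy_postgres_1 psql -U lemmy \
  -c "SELECT count(*) FROM post_report WHERE resolved = false;" \
  -c "SELECT count(*) FROM comment_report WHERE resolved = false;"
```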
Someone please correct me if I am wrong. But, as far as I am aware, if you purge a user from your instance, that action is federated to every other instance - so if you respond quickly to these reports, other instances' admins will not need to deal with them themselves. It is only when you act on a user from a different instance that the action stays local.
It may be an AI, or it may be a real human who is lying. The point of the application filter is to slow these approaches down enough to bring their impact to a manageable level. An automated AI bot will not be able to do much better than a human troll with some free time, because anomalous registration patterns - spikes, periodicity, and so on - are likely to be caught by the much more powerful processor that resides in the admin's head.
On the other hand, a catch-all domain e-mail, a VPN with a variable IP, and a captcha-defeating bot can be used to generate thousands of accounts in a very short amount of time. Without the application filter the instance is vulnerable to these high-throughput attacks, and the damage can be difficult to fix.
It is too easy to fake e-mails. You can set up a catch-all e-mail domain and spam the registrations that way. I am not a fan of giving out my e-mail, nor of collecting other people's e-mails.
My current message contains the following:
Please leave a short message (a sentence or two is enough) stating why you would like to join this instance and I will accept your application as soon as possible. The purpose of this form is to filter out spam bots, not to judge your motivation for joining.
It is not about them writing an essay to be let in. It is a very effective strategy to weed out spam accounts being registered en masse. One step is to make sure that the user wrote a coherent sentence that addresses the question, and the other is to check whether there is a sudden spike of similar new applications. Even ignoring the actual text, it is useful to be able to monitor whether you are getting rate-limited bursts of account creations, and having the ability to approve/deny lets you respond with less effort than if the accounts had already been created.
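As a concrete example of what I mean by monitoring bursts: a query like the one below gives a rough per-hour count of new applications. As before, the container, user, and table/column names (registration_application, published) are assumptions, so adjust them to your own schema:

```
# Rough per-hour count of new registration applications over the last day or so.
docker exec lemmy_postgres_1 psql -U lemmy -c \
  "SELECT date_trunc('hour', published) AS hour, count(*)
     FROM registration_application
    GROUP BY hour
    ORDER BY hour DESC
    LIMIT 24;"
```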
The spikes disappeared after I increased the RAM from 2 GB to 3 GB, and they have not re-appeared over the past few hours.
It appears that some process was hitting the 2 GB RAM limit - even though under normal use only about 800 MB of RAM is allocated. At first I thought that the high number of read IOPS might be due to swap memory kicking into action, but the server has no swap allocated.
The PostgreSQL container appears to fail when all of the RAM is used up, and the high CPU usage may somehow be related to repopulating the database as it restarts… But if that were the case I would expect to see similar spikes whenever I reboot - and I don't.
Conclusion: I am not sure why this happens.
But if anyone else notices these spikes it may be a RAM issue. Try increasing the RAM and see if they go away.
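For reference, this is roughly how to check whether any swap is configured, and how a small swap file could be added as a stop-gap if increasing the RAM is not an option (standard commands; the 2 GB size is just an example):

```
# Show memory and swap usage; if `swapon --show` prints nothing, no swap is configured.
free -h
swapon --show

# Optional stop-gap: create and enable a 2 GB swap file (run as root).
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
```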
It makes it a little bit easier to do, but it is not difficult to replicate this effect without changing the URL in the title - using a redirected URL and changing the redirect address, for example.
I think that this small addition to the ways this kind of attack can be delivered is more than counterbalanced by the convenience of having editable titles.
Woah, that’s new to me. And sounds very illegal. Are these reports credible?
EDIT: I found a discussion on the topic here, which offers some alternative explanations for why the comments appeared to be restored: https://kbin.social/m/RedditMigration/t/34112/Heads-up-Reddit-is-quietly-restoring-deleted-AND-overwritten-posts-and
I will give Reddit the benefit of the doubt because even though they are acting pretty badly, restoring user-deleted comments sounds to me like an even higher level of incompetence.
Exactly. I really enjoyed posting on reddit, but the idea that they see our comments as a trove of data to monetize at the expense of the community that created it really makes me never want to contribute again. Too bad I made the mistake of not deleting all of my comments or replacing them with junk when deleting my account :/
Yeah, that is true. But:
- A large site often collects a lot more sensitive data about its users, such as phone numbers, IPs, devices, activity, and browser fingerprints, and it may even correlate accounts
- Because of the value and quantity of that data, large sites will be attacked more often
However, a data breach large enough to be publicly exposed is not the only concern. I think that large sites pass their unencrypted communications through filters to detect 'illegal activities', and in some countries 'illegal' can mean simply criticizing a powerful individual. Companies also use unencrypted communications to mine information that may be valuable to advertisers.
I would not be surprised to learn that an intelligence agency has the ability to search through the plaintext of all of the DMs from a big site. Sites may give this ability to intelligence agencies in oppressive governments to track the activities of politically inconvenient journalists, for example. Laws can also change, and at some point it could be made legal to search through messages to detect even minor crimes. This may be unlikely, but it is possible.
The pressures and stakes are different, but I wouldn’t trust either a big company or two guys. If it is important for you that your DMs remain private, then you should generate your own keys, encrypt messages yourself, and keep the keys safe.
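For example, with GnuPG the whole flow can be done by hand. A minimal sketch, assuming both sides have gpg installed; the e-mail addresses and file names are just placeholders:

```
# Generate your own key pair (interactive prompts).
gpg --gen-key

# Export your public key so the other person can encrypt messages to you.
gpg --armor --export you@example.org > my_public_key.asc

# Import their public key, then encrypt a message that only they can read.
gpg --import their_public_key.asc
gpg --armor --encrypt --recipient them@example.org message.txt   # writes message.txt.asc

# The recipient decrypts it with their private key.
gpg --decrypt message.txt.asc
```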
Yeah, at least Lemmy tries to warn users not to use the DMs to send sensitive or private information, and suggests using a dedicated encrypted messenger instead:
The developers are very busy with Lemmy-specific features, and encrypted DMs would be a lot of work to build something that is a bit redundant. But I am sure that if someone wanted to work on that, the help would be appreciated.
Not that I particularly trusted reddit, but at least it was 1 corporation with (hopefully) some solid security procedures in place, and potential penalties for data breaches. Whereas in Lemmy, it might just be 2 random guys.
Personally I wouldn’t trust 2,000 random guys any more than 2 random guys. I assume any of my unencrypted communications are public.
Is this for a single community? Maybe you can host a simple password-protected site where the moderators can modify the message.
@fossilesque@mander.xyz is developing a plant id bot for !plantid@mander.xyz, perhaps they can explain to you how they are doing that.
You can create a one-person instance and hold your identity there.
If what you want is for every server to hold your identity, then you have to trust all of those servers. I think that an evil admin would be able to impersonate any user from any instance if that were the case. How do you delete your account? Can any admin delete your account everywhere? Which one is the real "you"?
Better delivery, and it avoids exposing your IP via e-mails, although it's best to set up some sort of tunnel to avoid having that problem altogether.
Is it possible to have a public-facing instance without exposing your IP? I am not sure I understand that part, and I am very interested in understanding how to achieve that.
Thank you for your hard work!!
I appreciate you going through this test period. I hope it all goes smoothly and that at least a few hairs remain on your heads by the end of this week. Good luck!