- cross-posted to:
- [email protected]
- [email protected]
What’s the point of primary and secondary backups if they can be accessed with the same credentials on the same network?
They weren’t normally on the same network, but were accidentally put on the same network during migration.
What’s the correct way to implement it so that it can still be automated? Credentials that can write new backups but not delete existing ones?
I don’t know if it is the “correct” way, but I do it the other way around. I have a server and a backup server. The server’s user can’t even see the backup server; it just packs a backup, the backup server pulls the data with read-only access, and then the main server deletes its local copy. Done.
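A minimal sketch of that pull-based setup, assuming made-up hostnames, paths and user names (cron scheduling left out):

```
# On the main server: pack a backup into a staging directory that a
# dedicated, otherwise unprivileged user on the backup server can read.
tar -czf /srv/backup-staging/backup-$(date +%F).tar.gz /etc /var/www

# On the backup server: pull the staged archives with read-only access.
# The main server never holds any credentials for the backup server.
rsync -av --ignore-existing backup-reader@mainserver:/srv/backup-staging/ /srv/backups/mainserver/

# On the main server, after the pull has run: clean up staged archives
# older than a few days so the staging area doesn't grow forever.
find /srv/backup-staging/ -name 'backup-*.tar.gz' -mtime +3 -delete
```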
deleted by creator
Neat! Thanks for mentioning it!
For an organisation hosting as many companies’ data as this one, I’d expect automated tape at a minimum. Of course, if the attacker had the time to start messing with the tape library, that’s lost as well, but it’s unlikely.
It depends on the pricing. For example, OVH didn’t keep any extra backups when their datacenter caught fire. But if a customer paid for backups, they were kept off-site and were recovered.
They might even be pretending to be a big hosting company while actually renting a dozen dedicated servers from a big player, which is much cheaper than maintaining a data center with 99.999% uptime.
Fundamentally there’s no need for the user/account that saves the backup somewhere to be able to read let alone change/delete it.
So ideally you have “write-only” credentials that can only append/add new files.
How exactly that is implemented depends on the tech. S3 and S3-compatible systems can often be configured so that data straight up can’t be deleted from a bucket at all.
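As a rough sketch of the “write-only” idea with the AWS CLI (the bucket, user and policy names here are made up): the IAM policy only grants PutObject, so a compromised backup job can add new objects but can’t delete existing ones, and S3 Object Lock can additionally enforce a retention period.

```
# Write-only IAM policy for the backup user (no Get/Delete permissions).
cat > backup-writer-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::example-backup-bucket/*"
    }
  ]
}
EOF
aws iam put-user-policy --user-name backup-writer \
  --policy-name backup-write-only \
  --policy-document file://backup-writer-policy.json

# Optional: Object Lock (must be enabled when the bucket is created)
# keeps objects undeletable for the retention period, even for admins
# when compliance mode is used.
aws s3api put-object-lock-configuration --bucket example-backup-bucket \
  --object-lock-configuration \
  '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}'
```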
A tape library that uses a robot arm https://youtu.be/sYgnCWOVysY?t=30s
Backups that are not connected to any device are not susceptible to being overwritten and encrypted by malware.
A tape library that uses a robot arm
https://youtu.be/sYgnCWOVysY?t=30s
Or like that vault in Rogue One?
I use immutable objects on Backblaze B2.
From the command line, using their tool, it’s something like
b2 sync SOURCE b2://BUCKET
and in the bucket settings you disable object deletion.
BorgBase also allows this: backups can be created, but deletions/overwrites are not permanent (unless you enable them).
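For reference, roughly what that B2 setup can look like from the CLI (bucket and key names are made up, and exact subcommand/flag names vary between B2 CLI versions):

```
# Application key scoped to the backup bucket, without the deleteFiles
# capability, so the backup job can write but never delete.
b2 create-key --bucket my-backup-bucket backup-writer listBuckets,listFiles,readFiles,writeFiles

# Upload/sync local backups into the bucket using that restricted key.
b2 sync /srv/backups b2://my-backup-bucket/server1

# Object Lock ("disable object deletion") is then turned on in the
# bucket settings, so even a leaked key can't purge existing versions.
```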
Time and time again, data hosting providers are proving that local backups not connected to the internet are way better than storing in the cloud.
The 3-2-1 backup strategy: “Three copies are made of the data to be protected, the copies are stored on two different types of storage media and one copy of the data is sent off site.”
How would that work in practice? One medium off-site, and two media on-premises?
Exactly.
This is the way.
Any redundant backup strategy uses both. They both have inherent data-loss risks. Local backups are great, but unless you store them in a bunker they are still at risk of fire, theft, vandalism and natural disasters. A good backup strategy stores copies in at least three locations: local, off-site and the cloud. Off-site backups are backups you can physically retrieve, like tapes stored in a vault in another city.
deleted by creator
How are you using that 7-port USB hub?
deleted by creator
Oh ok. So you’re using them effectively like cold storage backups? I was scared you were going to tell me that you were running a ZFS pool off a USB hub, lol.
deleted by creator
I dunno about that. If you actually were using a USB hub for ZFS, then I have a 10 petabyte flash drive to sell you.
deleted by creator
The only downside to something like this would be electrical surges if you leave the drives plugged in.
Now that you mention fucking incompetence, I need to verify my 3-2-1 backup strategy is correctly implemented. Thanks for the reminder, CloudNordic and AzeroCloud!
They had one job
People literally pay these guys to not screw up this one thing.
Danish hosting firms CloudNordic and AzeroCloud have suffered ransomware attacks, causing the loss of the majority of customer data and forcing the hosting providers to shut down all systems, including websites, email, and customer sites.
Other people’s computers. Never forget.
I feel really bad for everyone involved - customers and staff. The human cost in this is huge.
Yes, there’s a lot of criticism of backup strategies here, but I bet most of us who deal with this professionally know of systems that would also be vulnerable to a malicious attack, and those are only the shortcomings we know about. Audits and pentesting are great, but not infallible, and one tiny mistake can expose everything. If we were all as good as we think we are, ransomware wouldn’t be a thing.
I think that people generally overestimate how much money tech companies like this one actually make. Their profits are tiny. A lot of the time, tech companies run on investment money, and can’t actually turn a profit. They wait for the big acquisition or IPO payday. So if you think you’re actually gonna get 100k off them, good luck. Sometimes they’re barely keeping the lights on.
Put all the data in the cloud, they said. It will all be safe and handled by professionals!
That’s what you call an epic blunder.
It is a company destroying blunder.
I think they’re aware of that
Martin Haslund Johansson, the director of Azerocloud and CloudNordic, stated that he does not expect there to be any customers left with them when the recovery is finally completed.
The customers are already lost:
- Pay the expensive ransom: if the bad actor gives them the decryption key, customers are relieved but still pissed, take their data and move somewhere else with a big FO. They go out of business.
- Don’t pay the ransom: customers are pissed and move somewhere else with a big FO. They go out of business.
If you fuck up that badly you shouldn’t be allowed to operate in that industry.
Problem is that you have to work in the industry to fuck up that badly.
They’re a small company, they’ll probably just go bankrupt.
How is that even possible? What kind of hosting company runs in such a way that they would lose all their data to ransomware?
Sounds like they had all their backups online, instead of keeping offline copies. It’s a reminder that everyone needs at least one backup that isn’t connected to any computer. It’s also a reminder that “the cloud” should not be the only place you keep your data, because hosting providers are targets for this stuff and you don’t know how careful they are.
deleted by creator
I wonder why they can’t/won’t pay.