I have no problem supporting devs, but locking what should be core features behind a paywall is unacceptable for me.
I mean, software that's actively being developed can't be called DOA. Even if it's garbage now (and I don't know if it is), that doesn't mean it can't become useful at a future date.
It's not like a TV show where, once released, it can never be changed.
Oh, never mind; I saw this funding announcement for 6M and assumed it was the same company. Looks like they have many corporate investors… doesn't inspire too much confidence.
Although they are still using the Apache 2 license, and you can see they are very active on GitHub. It does look like a good FOSS project on the surface.
Yeah, it was bought by Kiteworks, which provides document management services for corporations (which explains why they mention traceable file access in their features a lot).
That being said, it seems they bought them in 2014, and it's been a decade now.
Correction: they were bought very recently; they have been accepting corporate funding for more than a decade, however. That's not bad in and of itself.
Thank you for providing a first-hand perspective. I'll probably try to spin up a Docker deployment for testing.
I don't really plan to use many of the plugins, since I think that was the downfall of NextCloud: trying to do everything instead of doing its core job well.
Also, looking through some of the issues and comments on GitHub about no plans to implement basic features (file search in the Android app) does not inspire confidence at all. That's one of the reasons I'm hoping the OwnCloud rewrite is good.
Did not know this. Thanks!
Looks like Kiteworks invested in OwnCloud in 2014, and they still seem to be going strong with the OSS development, which is a good sign.
This probably explains why there are so many active devs on the project and how they got a full rewrite into version 4 relatively quickly.
Already seems to have more features than Seafile.
I know, I did as well.
The point of the post is that there is a very active full rewrite of the whole thing, trying to ditch all the tech debt that NextCloud inherited from the OG ownCloud (PHP, Apache, etc.).
I had NextCloud on a Ryzen 3600 with an NVMe ZFS array. While faster than my previous Intel Atom with an HDD + SSD cache, Seafile blows it away in terms of speed and resiliency. It feels much more reliable with updates etc.
Exactly, Seafile is the best I've found so far, but a clean rewrite of the basic sync features would be great.
Seafile, for example, has full-text search locked behind a paywall even though tools like Elasticsearch could be integrated into it for free. Even the Android app has filename search locked behind a paywall; you have to log into the website on your phone if you need to search.
Pathetic state of affairs.
When I was starting out I almost went down the same path. In the end, Docker secrets are mainly useful when the same key needs to be distributed across multiple nodes.
Storing the keys locally in an env file that is only accessible to the docker user is close enough to the same thing for home use and greatly simplifies your setup.
I would suggest using a folder for each stack that contains one docker compose file and one env file. The env file contains passwords; the rest of the env variables are defined in the docker compose itself. Exclude the env files from your git repo (if you use it for version control) so you never check a secret into git. (In practice I have one folder for compose files that is in git, and my env files are stored in a different folder outside of git.)
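Rough sketch of what I mean (all the folder, image, and variable names here are just placeholder examples, not anything you have to copy):

```bash
# Hypothetical layout for one stack:
#   stacks/nextcloud/docker-compose.yml   <- tracked in git
#   stacks/nextcloud/.env                 <- NOT tracked in git, holds passwords
mkdir -p stacks/nextcloud && cd stacks/nextcloud

# Keep the secret file out of version control
echo ".env" > .gitignore

# Only the passwords go in the env file; lock it down to the docker user
cat > .env <<'EOF'
DB_PASSWORD=changeme
EOF
chmod 600 .env

# The compose file references the secret via variable substitution;
# non-secret settings stay in the compose file itself
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: mariadb:11
    environment:
      MARIADB_ROOT_PASSWORD: ${DB_PASSWORD}
      MARIADB_DATABASE: app
EOF

# docker compose automatically reads the .env file in the same folder
docker compose up -d
```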
I do this all via Portainer; it will set up the above folder structure for you. Each stack is a compose file that Portainer pulls from my self-hosted Gitea (on another machine). Portainer creates the env file itself when you add the env variables from the GUI.
If someone gets access to your system and is able to read the env file, they already have high-level access and your system is compromised regardless of whether you have the secrets encrypted via swarm or not.
True, but the downside of Cloudflare is that they act as a reverse proxy and can see all your HTTPS traffic unencrypted.
I like Finamp as my Android music client for Jellyfin.
I would strongly suggest a second device like an RPi with Gitea. That's what I have.
I use Portainer to pull straight from Git and deploy.
Not to mention the advantage of infrastructure as code. All my Docker configs are just a dozen or so text files (compose). I can recreate my server apps from a bare VM in just a few minutes, then copy the data over to restore a backup, revert to a previous version, or migrate to another server. Massive advantages compared to bare metal.
Yes, you should use something that makes sense to you but ignoring docker is likely going to cause more aggravation than not in the long term.
There is an issue with your database persistence. The file is being uploaded but it’s not being recorded in your database for some reason.
Describe in detail what your hardware and software setup is, particularly the storage and OS.
You can probably check this by trying to upload something and then checking the database files to see the last modified date.
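Something along these lines, assuming the database is a file-backed one and you know where its volume is mounted (the paths and container name below are just placeholders):

```bash
# Note the database file's current modification time
stat -c '%y %n' /srv/app-data/app.db

# Upload a test file through the app, then check again; if the timestamp
# doesn't change, writes aren't reaching the persisted database file
stat -c '%y %n' /srv/app-data/app.db

# If the database lives inside a container, check from inside it instead
docker exec myapp stat -c '%y %n' /var/lib/app/app.db
```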
Thanks! Makes sense if you can’t change file systems.
For what it's worth, ZFS lets you dedup on a per-dataset basis, so you can easily choose to have some files deduped and not others. Same with compression.
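For example (pool and dataset names are just placeholders):

```bash
# Enable dedup only on the dataset where it actually pays off
zfs set dedup=on tank/minecraft-backups

# Leave it off elsewhere and just use compression instead
zfs set compression=lz4 tank/media

# Check what's currently set on each dataset
zfs get dedup,compression tank/minecraft-backups tank/media
```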
For example, without building anything new, the setup could have been to copy the data from the actual Minecraft server to the backup box that has ZFS, using rsync or some other tool. Then the backup server just runs a snapshot every 5 mins or whatever. You now have a backup on another system, with snapshots at whatever frequency you want, with dedup.
Restoring an old backup just means you rsync from a snapshot back to the Minecraft server.
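Roughly like this, assuming the backup box mounts its dataset at /tank/minecraft (hostnames, paths, and the snapshot name are placeholders):

```bash
# On the backup server: pull the world data over from the Minecraft server
rsync -a --delete mcserver:/srv/minecraft/world/ /tank/minecraft/world/

# Then snapshot it; run this (plus the rsync) from cron at whatever interval
zfs snapshot tank/minecraft@$(date +%Y%m%d-%H%M)

# Restoring: pick an old snapshot from the hidden .zfs directory and
# rsync it back to the Minecraft server
rsync -a /tank/minecraft/.zfs/snapshot/20240501-1200/world/ \
      mcserver:/srv/minecraft/world/
```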
Rsync is only needed if both servers don't have ZFS. If they both have ZFS, the send and receive commands built into ZFS are designed for exactly this use case. You can easily send a snapshot to another server if they both have ZFS.
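Something like this (pool, dataset, and host names are placeholders):

```bash
# On the source server: take a snapshot and send it to the backup box
zfs snapshot tank/minecraft@nightly
zfs send tank/minecraft@nightly | ssh backupbox zfs receive backup/minecraft

# Later sends can be incremental, only shipping the blocks that changed
zfs snapshot tank/minecraft@nightly2
zfs send -i tank/minecraft@nightly tank/minecraft@nightly2 \
  | ssh backupbox zfs receive backup/minecraft
```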
ZFS also has Samba and NFS export built in if you want to share the filesystem with another server.
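For instance (dataset name is a placeholder; the NFS server or Samba still has to be installed on the box, ZFS just manages the exports):

```bash
# Export a dataset over NFS straight from ZFS
zfs set sharenfs=on tank/minecraft

# Or over SMB
zfs set sharesmb=on tank/minecraft

# Check the current share settings
zfs get sharenfs,sharesmb tank/minecraft
```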
I use ZFS so I'm not sure about others, but I thought all CoW file systems have deduplication already? ZFS has it built in. Why make your own file deduplication system instead of just using a ZFS filesystem and letting that do the work for you?
Snapshots are also extremely efficient on CoW filesystems like ZFS, as they only store the diff between the previous state and the current one, so taking a snapshot every 5 mins is not a big deal for my homelab.
I can easily explore any of the snapshots and pull any file from any of the snapshots.
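For instance, a cron entry plus the hidden snapshot directory covers it (dataset, snapshot, and file names are placeholders):

```bash
# Crontab entry: snapshot the dataset every 5 minutes
# (% has to be escaped inside crontab)
# */5 * * * * zfs snapshot tank/data@auto-$(date +\%Y\%m\%d-\%H\%M)

# List the snapshots that exist
zfs list -t snapshot -r tank/data

# Pull a single file out of an old snapshot via the hidden .zfs directory
cp /tank/data/.zfs/snapshot/auto-20240501-1200/important.txt /tank/data/
```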
I'm not trying to shit on your project, just trying to understand its use case, since it seems to me ZFS provides all the benefits already.
I only read the beginning but it says you can use it for private deployments but can’t use it commercially. Seems reasonable. Any specific issues?