Re: [gcmd-usr] Need help on Ubuntu 20.04
From: kht-lists
Subject: Re: [gcmd-usr] Need help on Ubuntu 20.04
Date: Tue, 09 Mar 2021 01:51:21 +0000
Hello again Greg,
I think I misunderstood. Are you running gnome-commander on the same machine
the drives are installed in? If so, that should be much simpler. I connect to my
servers across my network. For local file systems the bookmarks approach is
probably the best workaround; Ctrl-D will bring up the bookmarks dialog.
For what it is worth, here is how I have some "devices" configured in my older
version of g-c on CentOS 7. They simply point to file systems, just like your
video1, video2, etc. Try this simple setup; it might work. Sorry I cannot get it
as an in-line image, so it is attached.
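As a rough sketch of that kind of entry (I am only guessing the field names
from my older g-c, they may differ slightly in the newer version, and /video1
is just taken from your fstab below):

    Options -> Devices -> Add
      Alias:        video1
      Device:       /video1       (already mounted by fstab, so nothing is
                                   actually mounted here)
      Mount point:  /video1
      Icon:         anything you like

One entry per drive, video1 through video9.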
I don't have g-c on an Ubuntu 20.04 physical machine at the moment. I will try
to get one set up with the latest g-c and see what I can see.
Ken
Sent with ProtonMail Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Monday, March 8, 2021 7:46 PM, Greg <gregl@nycap.rr.com> wrote:
> Hi Ken,
>
> Thanks for the detailed e-mail. I struggle when I have to do
> things out of the ordinary on Linux. My drives are mounted through
> fstab. I have been running it this way for well over 15 years and it has
> served me well. Here is a copy and paste of my fstab.
>
> # / was on /dev/sde4 during installation
> UUID=68047441-d694-4eac-a885-97a80113738a  /          ext4  errors=remount-ro  0  1
>
> # /boot was on /dev/sde2 during installation
> UUID=b61d9c4e-7889-4202-bd20-c1a1f9e58629  /boot      ext4  defaults           0  2
>
> # /boot/efi was on /dev/sde1 during installation
> UUID=DD3D-010F                             /boot/efi  vfat  umask=0077         0  1
>
> # /video1 was on /dev/sdb1 during installation
> UUID=bca503ec-434e-46a9-91f1-950fa2af6c2a  /video1    ext4  defaults           0  2
>
> # /video2 was on /dev/sdc1 during installation
> UUID=e8991822-93a7-4e3f-980f-3d1c522c5134  /video2    ext4  defaults           0  2
>
> # /video3 was on /dev/sdg1 during installation
> UUID=cfd43643-335a-45e9-ba83-532b38ecd859  /video3    ext4  defaults           0  2
>
> # /video4 was on /dev/sdi1 during installation
> UUID=6c905cc7-b8d8-4747-9be3-f728005aa4e0  /video4    ext4  defaults           0  2
>
> # /video5 was on /dev/sde5 during installation
> UUID=df618a78-761e-444e-a38b-50ac5b1eb7de  /video5    ext4  defaults           0  2
>
> # /video6 was on /dev/sdf1 during installation
> UUID=a9bc9128-3569-4c93-9106-2df70ee17b7e  /video6    ext4  defaults           0  2
>
> # /video7 was on /dev/sdh1 during installation
> UUID=b57106e5-3767-4c37-8944-7c9d823acd2e  /video7    ext4  defaults           0  2
>
> # /video8 was on /dev/sda1 during installation
> UUID=7f85bb49-8e17-421f-a2c6-6614e9bf22f2  /video8    ext4  defaults           0  2
>
> # /video9 was on /dev/sdj1 during installation
> UUID=6ab3b7b8-aa93-4dbc-9f24-7123b9108610  /video9    ext4  defaults           0  2
>
> # swap was on /dev/sde3 during installation
> UUID=3c74c1b1-7654-4d5f-aa13-6011041b4289  none       swap  sw                 0  0
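A quick way to double-check entries like those after editing (plain util-linux
commands, nothing g-c specific):

    # Re-process /etc/fstab; filesystems that are already mounted are skipped.
    sudo mount -a
    # List what is mounted where, limited to the ext4 video drives.
    findmnt -t ext4 | grep video

If mount -a stays quiet and findmnt shows all nine /videoN mount points, the
fstab side is fine and whatever g-c loses on exit is purely its own device
configuration.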
>
> It runs 24/7. All nine drives are on the same computer, nothing too
> extreme.
>
> I do have the drives bookmarked and that works OK, but not as nice as the
> device buttons. I did get G-C working using devices, but as soon as I
> quit the program it lost all the mappings. When I read you were working
> on a snap for GC I figured that was the answer. I tried several other
> commander-type programs, but they aren't GC.
>
> Thanks again, Greg
>
> On 3/8/21 3:38 PM, kht-lists via gcmd-users wrote:
>
> > Hello Greg,
> > How are you sharing the drives in your media server (nfs, samba, something
> > else)? I have a somewhat similar situation and this is how I handle it.
> > I have 3 data servers. Each one has drives in pairs which I keep mirrored.
> > The "a" drives in each pair are exported with nfs. The "b" drive in each pair
> > is the backup. As I do not need access to the data at all times, I only
> > bring up a server when I need it. This adds a little wrinkle which you might
> > not have. If I mount an nfs export from the Mate desktop (on CentOS 7 in
> > this case) the file management system "caja" goes berserk if the server
> > shuts down with the nfs export still mounted. I therefore have server-on and
> > server-off scripts which I run as needed.
> > On my CentOS workstation I have the following directory structure to access
> > the server data:
> > /data/
> > /data/servers/
> > /data/servers/data14.1/
> > /data/servers/data14.2/
> > /data/servers/data14.3/
> > /data/servers/data18.1/
> > /data/servers/data18.2/
> > /data/servers/data22.1/
> > /data/servers/data22.2/
> > These are the mount points for drive 1a on server t14, drive 2a on server
> > t14, etc. Here is the script to mount server t14:
> > #!/bin/bash
> > sudo mount t14:/media/data14.1a /data/_servers/data14.1
> > sudo mount t14:/media/data14.2a /data/_servers/data14.2
> > sudo mount t14:/media/data14.3a /data/_servers/data14.3
> > echo mounting finished
> > sleep 10
> > exit
> > And the unmount script:
> > #!/bin/bash
> > sudo umount --force -vvv /data/_servers/data14.1
> > sudo umount --force -vvv /data/_servers/data14.2
> > sudo umount --force -vvv /data/_servers/data14.3
> > echo un-mounting finished
> > sleep 10
> > exit
> > You could create a shortcut in Gnome-Commander to access the top of the
> > mount tree /data/servers or even a shortcut for each drive. As g-c is not
> > attempting to make the connection - and thus not using gnome-vfs - I would
> > not expect any issues. This is not quite as handy as "device" buttons but...
> > If your media server is on-line essentially full time you might wish to
> > look into autofs. It will automatically mount nfs exports when accessed.
> > However, I have had bad luck using it when the servers are not on-line.
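For completeness, a minimal autofs setup for exports like the ones in the mount
script above usually amounts to two small files (the map file name
/etc/auto.servers is just an example; only /etc/auto.master is fixed, and the
server/export names are reused from the script and may not match your layout):

    # /etc/auto.master -- hand the /data/servers tree to the automounter and
    # unmount idle exports after 60 seconds:
    /data/servers   /etc/auto.servers   --timeout=60

    # /etc/auto.servers -- one line per export; the key becomes the directory
    # created under /data/servers on first access:
    data14.1   -fstype=nfs   t14:/media/data14.1a
    data14.2   -fstype=nfs   t14:/media/data14.2a
    data14.3   -fstype=nfs   t14:/media/data14.3a

followed by restarting the automounter (systemctl restart autofs). As noted
above, though, it can behave badly when the server is simply powered off.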
> > I have given myself NOPASSWD access to the sudo mount and umount commands
> > in visudo so I am not bothered by being asked for a password each time I
> > run a script. Ask if you need help with that. If you are using Samba (smb)
> > to share the drives you could probably set up something similar on your
> > Ubuntu 20.04 machine. It has been a while since I have done that.
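For anyone wanting the same NOPASSWD convenience, the sudoers line typically
looks something like this (add it with 'sudo visudo'; the username is
illustrative, and the command paths should match what 'command -v mount'
reports on your machine):

    # Allow greg to run mount/umount through sudo without a password prompt:
    greg   ALL=(root)   NOPASSWD: /usr/bin/mount, /usr/bin/umount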
> > Ken
> > I don't have a Swiss bank account but I do have a Swiss email address :-)
> > Sent with ProtonMail Secure Email.
> > ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> > On Monday, March 8, 2021 10:06 AM, Greg <gregl@nycap.rr.com> wrote:
> >
> > > Thanks for the advice. It may take me a while to try them. The devices I
> > > am trying to mount are nine hard drives, each one named Video1 through
> > > Video9. This is on my media server. I hope I can get it to work. I have
> > > resisted upgrading to Ubuntu 20.04 because of GC. If I can't get it
> > > going, I may return to Ubuntu 18.04. I have tried several commander-type
> > > file managers, but nothing compares to GC.
> > > Thanks for the suggestions.
> > > Greg
> > > On 3/7/21 4:43 PM, mi wrote:
> > >
> > > > I'm also misusing the device buttons as graphical bookmarks by creating
> > > > 'pseudo-devices', which means, instead of the previous mount command,
> > > > I'll insert into the 'device' field just a
> > > >
> > > >     > /dev/null && cd
> > > >
> > > > and it will change directory into what is configured as (not really a)
> > > > mountpoint.
> > > > For example, insert '/tmp' there and you'll have a fast switch to the
> > > > temp folder.
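Spelled out as a device entry, mi's trick would look roughly like this (field
names depend on the g-c version, and the '> /dev/null && cd' text is entered
literally, redirect character and all):

    Alias:        tmp
    Device:       > /dev/null && cd
    Mount point:  /tmp

Clicking the resulting "tmp" device button then drops the panel into /tmp
instead of performing a real mount.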
> > > >
> >
>
> gcmd-users mailing list
> gcmd-users@nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gcmd-users
[Attachment: device.png (PNG image)]