Wednesday, January 17, 2018

yum shell - bat out of dependency hell

There's evil in the air and there's thunder in the sky
(Meatloaf "Bat out of hell")

# yum install foo
Error: foo conflicts with bar

Once again I have had the pleasure of dependencies between RPM packages ending my attempt to install a single package with a suggestion to remove core packages. I think this most often happens with MySQL or Percona packages, but I am sure MariaDB can put you in the same situation too. It's not the first time I have been here..

[root@ftp01-prod ~]# yum install Percona-Server-client-57
Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package Percona-Server-client-57.x86_64 0:5.7.20-19.1.el7 will be installed
--> Processing Dependency: Percona-Server-shared-57 for package: Percona-Server-client-57-5.7.20-19.1.el7.x86_64
--> Running transaction check
---> Package Percona-Server-shared-57.x86_64 0:5.7.20-19.1.el7 will be installed
--> Processing Dependency: Percona-Server-shared-compat-57 for package: Percona-Server-shared-57-5.7.20-19.1.el7.x86_64
--> Running transaction check
---> Package Percona-Server-shared-compat-57.x86_64 0:5.7.20-19.1.el7 will be installed
--> Processing Conflict: Percona-Server-shared-compat-57-5.7.20-19.1.el7.x86_64 conflicts Percona-Server-shared-56
--> Finished Dependency Resolution
Error: Percona-Server-shared-compat-57 conflicts with Percona-Server-shared-56-5.6.38-rel83.0.el7.x86_64
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
[root@ftp01-prod ~]#

So if I want to install Percona-Server-client-57 I have to install Percona-Server-shared-compat-57 too, and that I can't because of the already installed Percona-Server-shared-56. OK, so I will just remove Percona-Server-shared-56 and then install Percona-Server-shared-compat-57 before doing the install I first tried to do:

[root@ftp01-prod ~]# yum remove Percona-Server-shared-56
Dependencies Resolved

 Package                           Arch            Version                      Repository                 Size
 Percona-Server-shared-56          x86_64          5.6.38-rel83.0.el7           @percona-release          3.4 M
Removing for dependencies:
 MySQL-python                      x86_64          1.2.5-1.el7                  @centos_os                284 k
 fail2ban                          noarch          0.9.7-1.el7                  @epel                     0.0
 fail2ban-sendmail                 noarch          0.9.7-1.el7                  @epel                      11 k
 perl-DBD-MySQL                    x86_64          4.023-5.el7                  @centos_os                323 k
 postfix                           x86_64          2:2.10.1-6.el7               @centos_os                 12 M
 redhat-lsb-core                   x86_64          4.1-27.el7.centos.1          @anaconda                  45 k

Transaction Summary
Remove  1 Package (+6 Dependent packages)

Installed size: 16 M
Is this ok [y/N]:

Very much not OK. I'd like to at least keep things with descriptions like this:

Description : The Linux Standard Base (LSB) Core module support
: provides the fundamental system interfaces, libraries,
: and runtime environment upon which all conforming
: applications and libraries depend.

The problem seems to be that postfix and the other dependents need a shared library that both Percona-Server-shared-56 and Percona-Server-shared-compat-57 provide. So I just need to swap the former for the latter, and then I can run my original install.

OK, so I want to both remove a package and install a package. And I want to do it at the same time, so that I don't have to remove things like redhat-lsb-core. Did you notice the use of the word transaction in "Transaction Summary" from yum? A transaction is exactly what I want. Luckily yum provides a way of doing this, and probably has since forever, but I didn't learn about it until today. And as so many times before, it is a shell that solves our problems:

[root@ftp01-prod ~]# yum shell
Loaded plugins: fastestmirror, priorities
> remove Percona-Server-shared-56
> install Percona-Server-shared-compat-57
Loading mirror speeds from cached hostfile
> run
--> Running transaction check
---> Package Percona-Server-shared-56.x86_64 0:5.6.38-rel83.0.el7 will be erased
---> Package Percona-Server-shared-compat-57.x86_64 0:5.7.20-19.1.el7 will be installed
--> Finished Dependency Resolution

 Package                             Arch       Version                Repository            Size
 Percona-Server-shared-compat-57     x86_64     5.7.20-19.1.el7        percona-release      1.2 M
 Percona-Server-shared-56            x86_64     5.6.38-rel83.0.el7     @percona-release     3.4 M

Transaction Summary
Install  1 Package
Remove   1 Package

Total download size: 1.2 M
Is this ok [y/d/N]:

Yes, very much thank you! And then finally:

[root@ftp01-prod ~]# yum install Percona-Server-client-57
Dependencies Resolved

 Package                        Arch         Version                  Repository             Size
 Percona-Server-client-57       x86_64       5.7.20-19.1.el7          percona-release       7.2 M
Installing for dependencies:
 Percona-Server-shared-57       x86_64       5.7.20-19.1.el7          percona-release       747 k

Transaction Summary
Install  1 Package (+1 Dependent package)

Total download size: 7.9 M
Installed size: 41 M
Is this ok [y/d/N]:y

Done and done :-)
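A footnote for next time: yum shell can also read its commands from a file (yum shell /path/to/cmds, add -y to skip the prompt), so the whole swap can be scripted. The file is just the same commands you would have typed interactively:

```
remove Percona-Server-shared-56
install Percona-Server-shared-compat-57
run
```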

Friday, August 29, 2014

Copying permissions and ownership of files

One of the big joys I find in working with Linux and Unix systems is that there is always something new I can learn, even with tools I have been using for over 15 years.

Today I have been working for one of my customers on a script that, for a bunch of files matching a glob, will read each file, process it, and generate some output into a new file.

I need the output files to have the same permissions and ownership as their respective sources, and started to look into different more or less elaborate ways of doing that. But then it turns out that this is a very well solved problem already. Both chmod and chown have a --reference option for this; from the chown man page:
              use RFILE's owner and group rather than specifying OWNER:GROUP values

So in my script I have added two lines, and now the permissions and ownership are copied onto the new files:

    chmod --reference="${LOGFILE}" "${DSTDIR}/${LOGFILE}"
    chown --reference="${LOGFILE}" "${DSTDIR}/${LOGFILE}"
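If you want to convince yourself before pointing this at real files, a scratch-file test is quick (GNU coreutils assumed; stat is only there to show the resulting mode):

```shell
src=$(mktemp)
dst=$(mktemp)                     # mktemp creates files with mode 600
chmod 640 "$src"
chmod --reference="$src" "$dst"   # dst now gets src's mode
stat -c %a "$dst"                 # prints 640
```

chown --reference works the same way, but changing ownership needs root, so it is harder to try out casually.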

I guess these options have been there for years and years, waiting for me:-)

Wednesday, February 8, 2012

Adventures in bash - catching several exit values in a piped set of commands

"All in all, very odd, bash continues to be the most bizarre of languages, convoluted, twisted, but with strange solutions thrown in just when you are about to give up hope entirely." (forum post at Techpatterns)

Yesterday I was re-working a database backup script at one of my customers and stumbled onto a problem when I wanted to have both proper error handling and at the same time avoid filling the disk.

The code providing the challenge was a mysqldump piped straight into gzip: I need to compress the output on the fly, because otherwise I run into problems with the disk filling up. And yes, doing it like this also means that restores are quite a pain, but that is another problem.

Normally I do error handling in scripts by evaluating $?, but to have proper error handling here I need to capture the exit values of both mysqldump and gzip. And $? only gives me the exit value of gzip - the least important of the two.

Luckily, and as expected, I'm not the first person to run into this problem, and by way of googling I found that Bash actually has a built-in way of giving me both exit values - the array $PIPESTATUS. $PIPESTATUS is an array with all the exit values from your last command line: ${PIPESTATUS[0]} contains the first exit value, ${PIPESTATUS[1]} the second, and so on
sigurdur@ifconfig:~$ true | false
sigurdur@ifconfig:~$ echo ${PIPESTATUS[0]}
0
sigurdur@ifconfig:~$ true | false
sigurdur@ifconfig:~$ echo ${PIPESTATUS[1]}
1
You can also get the entire array
sigurdur@ifconfig:~$ true | false | false | true
sigurdur@ifconfig:~$ echo ${PIPESTATUS[@]}
0 1 1 0
A single, non-piped command is considered to be a "pipe of one", thus leaving you with a $PIPESTATUS array with one value. Since $PIPESTATUS is updated after every command line I had to copy the array before extracting the exit values.
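That copy matters, because even a test like [ ... ] is itself a command and resets $PIPESTATUS. A minimal sketch in plain bash:

```shell
true | false | true
status=("${PIPESTATUS[@]}")   # copy at once - the very next command resets PIPESTATUS
echo "${status[@]}"           # prints: 0 1 0
```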
So my code ended up running the pipe, copying $PIPESTATUS into another array, and checking both values - with this comment on top:
# We want the exit values of both mysqldump and gzip
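In sketch form - with false standing in for a failing mysqldump and /dev/null for the dump file, so the example runs anywhere - the pattern is:

```shell
# Sketch of the backup pattern: false stands in for a failing
# mysqldump, /dev/null for the dump file
false | gzip > /dev/null
status=("${PIPESTATUS[@]}")   # copy before anything overwrites it
if [ "${status[0]}" -ne 0 ] || [ "${status[1]}" -ne 0 ]; then
    echo "backup failed: dump=${status[0]} gzip=${status[1]}" >&2
fi
```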
PIPESTATUS has probably been part of Bash since forever, but to me it was new - and it solved my problem. Fun stuff:-)

Friday, January 7, 2011

When the radius server came to a grinding halt - and how we brought it back

One of my customers has a few hundred pieces of hardware spread around the country that are authenticated by and get their IP addresses from a radius server we run. In between the few hundred pieces of hardware and the radius server there is another server that among other things handles the lookups against the radius server and then gives the clients their connection information.

This had been running for four years with no problems whatsoever, handling a handful or so of authentications a minute.

Yesterday the intermediate server suddenly rebooted (we learned this later), and that led to a need to reinitialize the network connections for all the little pieces of hardware (thus authenticating them and looking up their IPs) - all at once. The load on the radius server went through the roof, we had large amounts of timeouts on the lookups, and the radius server came to a grinding halt. Most of the little pieces of hardware didn't get their IPs, the customer's monitoring system went all red, and no-one was enjoying themselves much.
A trivial radius server should be able to handle at least a few thousand requests a second, so we were a bit boggled by this. A restart of the service didn't help.

We saw log lines like these in large amounts:

Thu Jan  6 15:30:26 2011 : Error: Discarding duplicate request from client FOOFOO3:49910 - ID: 32 due to unfinished request 895
Thu Jan  6 15:30:26 2011 : Error: WARNING: Unresponsive child (id 1314167728) for request 890

The server was also exhausted CPU-wise - and mostly spent its time in system CPU.
This led me to suspect that the problem was related to some kind of busy waiting on some kind of resource. A quick check in the trusty old /proc filesystem showed that the radius process had a couple of hundred file descriptors pointing to the same file - /var/log/radius/radutmp - an 89MB file with almost 1M short lines.
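The check itself is nothing more than listing the fd directory under /proc - shown here against the current shell ($$), since the radiusd PID from that day is long gone:

```shell
# Every open file descriptor of a process shows up as a symlink in
# /proc/<pid>/fd; $$ (this shell's own PID) makes the example run
# anywhere on Linux
ls -l "/proc/$$/fd"
```

On the radius box it was the equivalent of ls -l /proc/$(pidof radiusd)/fd | grep -c radutmp (radiusd being the assumed process name).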

So what is this radutmp file for? It is updated whenever a client logs in or out, and is used by the command radwho to see who is logged in at the moment. On this system we have no need for that, so I removed the relevant sections from the config and restarted the service - and service was immediately restored.

According to the default and heavily commented config file, the radutmp file is not a log file and does not need to be rotated. But ours had obviously been growing over the years, so some kind of housekeeping should probably have been done. Out of curiosity I tried starting with an empty radutmp and the original config - and it seemed that the size of the radutmp file affects how hard it is for the server to do its work - which makes sense. (But this testing was not done in any way scientifically, and might just be misleading.)

I think the main lesson to learn from this is that /proc is your friend - always. Secondly, remove parts of services you don't need - problems might just as well show up there.

Tuesday, November 9, 2010

Quick trick to speed up Firefox (again)

After a few months of use Firefox tends to slow down and spend more and more time doing disk-IO. The reason is that FF uses sqlite for various things like where you have been and what you have filled into forms. These databases get fragmented, and there is no automatic vacuuming of them (but in Thunderbird it seems this is done automagically, go figure).

This has bothered me from time to time over the years, and now it happened again. And this time I thought I would write down the quick fix (for Linux, probably works with Macs too):

  •    stop Firefox
  •    cd into your firefox profile
  •    for i in `find . -iname \*sqlite`; do echo " -- $i --"; ls -l "$i"; sqlite3 "$i" VACUUM; ls -l "$i"; echo; done

This reduced places.sqlite from 21 to 3 MB, cookies.sqlite almost halved in size, while some of Firefox's own databases remained unchanged.

And most importantly - Firefox feels a lot quicker now:-)