Review, CentOS System Administration Essentials

(FYI, Packt offers this and many other titles at just 5 USD until January 6th, 2015)

I just finished reading a copy of “CentOS System Administration Essentials”, written by Andrew Mallett, which I got from the publisher for review. Here is what I found.

Verdict: a good book, except for a couple of (small) points

I have enjoyed reading this book, which I am going to call CSEA from now on for brevity. I think it is, indeed, a useful, concise tool for beginner system administrators. At 174 pages for the printed version, CSEA provides only the essentials, as its title honestly says, but it does explain them well.

I only have two “negative” (note the quotes) things to say about CSEA, and neither of them is serious. The first is about Chapter 1, titled “Taming vi”, which explains a few tricks of both vi/Vim and the Unix command line. In my opinion, those pages are not enough to work as a starter (even a purely motivational one) for readers who have never seen those tools before, and add too little for all the others. In other words, while there is nothing technically wrong in that chapter, it doesn’t add any value to the book.

The second “critique” I have about CSEA concerns the title itself: something good for the reader, after all, but potentially bad for the author and the publisher. Rather than “CentOS System Administration Essentials” I’d have called this book something like “Learning Linux System Administration Essentials using CentOS as a reference system”.

By this I mean that most of the content is valid on the great majority of GNU/Linux distributions, with the obvious exception of Chapter 4, which covers RPM and YUM. That’s why I said “good for the reader”: CSEA is also useful for people who are not going to use CentOS/RHEL or Fedora, and remains useful for those who do, if and when they move to another distribution.

The “potentially bad” part is that, for the very reasons I just explained, the title may seem misleading. People who judge a book only by its cover, I mean its title, may not buy CSEA simply because they are currently using another distribution, even if they would benefit from the book.

What’s in the book?

CSEA is explicitly aimed at people who are considering a career as Red Hat Enterprise Linux administrators, but, as I said, it is immediately useful to whoever wants to follow best practices in Linux administration of any flavour. In general, all chapters deliver what the book preface promises: clear explanations of some essential concepts, concise enough for a quick read, and with just enough practical details to make further reading on each subject much easier. Here is the list of chapters, with a few comments where needed:

  • Taming vi: see above
  • Cold Starts: the GRUB boot loader, and how to customize its behavior
  • CentOS Filesystems: again, good content with a slightly misleading title. Permissions, hard and soft links, SUID and sticky bits, main features of BTRFS: all stuff that every Linux administrator must know, and valid on any distribution, not just CentOS and its relatives
  • RPM packages and YUM: learn how to prepare your own RPM packages and local software repositories
  • Linux processes: a nice presentation of little-known but precious tools such as pgrep, pstree and pkill
  • User management: I liked the coverage of getent and quotas, as well as the “user creation” script
  • LDAP, or how to manage user accounts on many computers, from one single place
  • The Nginx Web server: basic configuration for a LEMP stack (Linux + “e”Nginx + MySQL + PHP). Here I’d have liked a few more pages with one complete, real-world example, e.g. how to make WordPress run with Nginx, but it’s still a good chapter!
  • Configuration management with Puppet
  • Security: Pluggable Authentication Modules (PAM) and SELinux essentials, plus some password hardening tricks
  • “Graduation Day”, that is a summary of the whole book, and some extra best practices for SSH, Nginx and OpenLDAP

Review, The CentOS 6 Linux Server Cookbook

The CentOS 6 Linux Server Cookbook is a Packt Publishing title first published in April 2013. You can buy it in paper format (about 370 pages) or as an ePUB or PDF file (the PDF is black and white only, whereas the ePUB version is in colour). In general I believe, especially in these times of PRISM and widespread economic crisis, that the more people learn how to run their own Free Software servers, the better. I’ve already explained how and, above all, why we should all do this with email and (at least) social networking and online publishing. That’s why, when Packt asked me to review the Cookbook, I accepted.

How is the Cookbook?

The complete Table of Contents, which lists all the included recipes, is available on the Packt website, so I’ll just summarize it here. After chapters on installation and initial configuration, there are others devoted to:

  • Managing Packages with Yum
  • Securing CentOS
  • Working with Samba and Internet Domains
  • Running Database, Email, WWW and FTP servers

Almost all recipes have the same, four-part structure. After an introduction explaining the goal of the recipe, a “Getting Ready” section tells you what to read, check or do before applying it. The “How to do it” part is the actual recipe: a clear sequence of commands to type or things to write in configuration files. The “How it works” part answers the question “So what did we learn from this experience?”. It goes back to the beginning of the recipe and comments each single step again, adding many details and explaining why and how each instruction relates to the others.

Finally, many recipes also have a “There’s more” section, which describes corner cases or variants of the basic procedure. Some expert Linux users may find many “How it works” sections a bit too repetitive and/or filled with unnecessary details, if not just this side of boring. I consider this a likely possibility because… I had just that feeling myself, several times.

Then again, this is not a book targeting people who are already experts. This is a cookbook to get started quickly without making dangerous mistakes, in order to become an expert, and it says so clearly at the very beginning:

rather than being about CentOS itself, this is a book that will show you how to get CentOS up and running. It is a book that has been written with the novice-to-intermediate Linux user in mind who is intending to use CentOS as the basis of their next server.

From that perspective, the “repetitions” are much more a feature than a bug. Besides, by being cleanly contained in the “How it works” sections they don’t really slow down readers who just need to learn some commands or refresh their memory with the details of some procedure, so I don’t mind them!

Looking at the recipes that were chosen for the cookbook, the initial chapters are very thorough. There is practically everything you need to install CentOS and get started with it. My only nitpick there is that I wouldn’t suggest that people run yum -y update before explaining, in another recipe, that the `-y` switch won’t ask for confirmation. Even Chapter 7, on DNS and BIND, has all the basic information.
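To make the difference concrete, here is a minimal example (nothing book-specific, just standard yum usage): without -y you get a chance to review the proposed transaction, while -y (short for --assumeyes) answers “yes” to every prompt automatically, which is what you want in scripts but not necessarily on a server you are still learning about.

  # interactive: yum lists the packages to update and asks for confirmation
  yum update
  # unattended: -y answers “yes” to every prompt, no review possible
  yum -y update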

Chapters of the last group (“Running … servers”), instead, are less complete, which is both… “bad” and good, for reasons I’ll explain in a moment. As far as the rest of the book goes, what is there is good: pertinent content, written as simply as possible. Things that, instead, are not in the Cookbook, and that the next edition should cover with more recipes, are:

  • partitioning and backup strategies (databases included)
  • SSH configuration
  • running virtual machines in a CentOS server
  • print services

If it were up to me, I’d trim the Install and FTP chapters (especially the latter!) to make room for recipes on these topics in the next edition.

About the “Running servers” chapters

The recipes in the last three or four chapters cover the minimum one has to do to get those servers up and running without hurting one’s users and the rest of the Internet in the process. They do it well, but proper configuration and administration of database, WWW and email services requires much more. While some potential readers may find it “bad” that the cookbook doesn’t have more on those topics, it is, instead, a good thing.

Almost all the configuration issues and other headaches that I have had over the years with my database, WWW and email servers were “internal” issues. Some were due to bugs in the software, many more to unusual requirements I had, or to mistakes I made. In other words, they didn’t depend at all on what distribution those servers were running. This is why it is good that a CentOS cookbook doesn’t spend too much time on certain topics. You will have to go to other places anyway to make email or any LAMP CMS really usable, so why bother?

Is this a book worth buying?

Yes. All in all, I consider this Packt title quite a useful book for beginner CentOS server administrators. I use CentOS myself on my personal Web and email servers. Even within the limits I just explained, if I had had such a cookbook when I first set them up, it would have saved me a considerable amount of time, simply by having most of what I needed to do in one place, all explained in one consistent way.

Other reasons for buying a book like this are that CentOS and other GNU/Linux distributions specifically developed for servers have both longer release cycles and fewer differences between them than environments like Fedora and Ubuntu. In other words, this is a book that will remain current longer than many other ICT titles, and most of it would be usable even on other server distributions.

How to transform (almost) plain ASCII text to Lulu-ready PDF files, part 3

This is the core script I used to transform a set of plain ASCII files with Txt2tags markup into one print-ready PDF file. Part 1 of this tutorial explains why I chose txt2tags as source format and Part 2 describes the complete flow.

Book creation workflow

  Listing 1: make_book.sh

    1   #! /bin/bash
    2
    3   CONFIG_DIR='/home/marco/.ebook_config'
    4   PREPROC="%!Includeconf: $CONFIG_DIR/txt2tags_preproc"
    5
    6   CURDIR=`date +%Y%m%d_%H%M_book`
    7   echo "Generating book in $CURDIR"
    8   rm -rf $CURDIR
    9   mkdir $CURDIR
   10   cp $1 $CURDIR/chapter_list
   11   cd $CURDIR
   12
   13   FILELIST=`cat chapter_list | tr "\12" " " | perl -n -e "s/\.\//\.\.\//g; print"`
   14
   15   echo ''                                 >  source_tmp
   16   echo  $PREPROC                          >> source_tmp
   17   sed 's/\.\//%!Include: \.\//g' $FILELIST >> source_tmp
   18
   19   replace_urls_with_refs.pl source_tmp > source_with_refs
   20
   21   txt2tags -t tex -i source_with_refs -o tex_source.tex
   22   perl -pi.bak -e 's/OOPENSQUARE/[/g'   tex_source.tex
   23   perl -pi.bak -e 's/CLOOSESQUARE/]/g'  tex_source.tex
   24
   25   #remove txt2tags header and footer
   26   LINEE=`tail -n +8 tex_source.tex | wc -l`
   27   LINEE_TESTO=`expr $LINEE - 4`
   28   tail -n +8 tex_source.tex | head -n $LINEE_TESTO > stripped_source.tex
   29
   30   source custom_commands.sh
   31
   32   cat $CONFIG_DIR/header.tex stripped_source.tex $CONFIG_DIR/trailer.tex > complete_source.tex
   33   pdflatex complete_source.tex
   34   pdflatex complete_source.tex
   35
   36   # Generate URL list in HTML format
   37   generate_url_list.pl chapter_list html | txt2tags -t xhtml --no-headers -i - -o url_list.html

All the txt2tags settings and some LaTeX templates are stored in the dedicated folder $CONFIG_DIR, so you can have a different configuration for each project. The script itself takes only one parameter: the name of a file listing all the source files that must be included in the book. Lines 6 to 12 create a work directory and copy the file list inside it. The files must be listed with their absolute paths, in the order in which they must appear in the book.
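Just to make the invocation concrete, here is a hypothetical example (the paths and file names are placeholders, not taken from any real project):

  $ cat chapter_list
  /home/marco/book/ch01_intro.t2t
  /home/marco/book/ch02_setup.t2t
  /home/marco/book/ch03_conclusions.t2t
  $ make_book.sh chapter_list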

Lines 13 to 17 of the script create a single source file (source_tmp) that contains the Include command loading all the txt2tags preprocessing directives (line 16) and then the content of all the individual files, in the right order but without the Include directives that are needed when processing them individually (line 17).

Line 19 runs a separate script, replace_urls_with_refs.pl, that adds the cross-reference numbers to the book text and dumps the result into another temporary file, source_with_refs. This script, not included here for brevity (and because you can do without it if, unlike me, you don’t need cross-references), only does two things. First it reads a file in the $CONFIG_DIR folder that contains, one per line, all the URLs mentioned in the source files and the corresponding captions, in this format:

http://www.greenparty.org.uk/news/2851 | Windows Vista? A “landfill nightmare”

Next, replace_urls_with_refs.pl reads source_tmp and, whenever it finds a line like:

the UK Green Party officially declared Vista... a ["landfill nightmare" http://www.greenparty.org.uk/news/2851]

it generates the right cross-reference number and puts it right after the text associated with the link itself, writing everything to source_with_refs. You can see the effect in the last figure of part 2 of this tutorial.

After all this pre-processing, we can finally run txt2tags to produce a LaTeX file (line 21), but right after that we need to put back square brackets in place of some temporary markup generated by replace_urls_with_refs.pl (lines 22/23).

The next part of the script, until line 32, removes the default LaTeX header and footer created by txt2tags, replaces them with those stored in the $CONFIG_DIR folder and dumps everything into complete_source.tex: this move allows you to declare whatever LaTeX class you wish to use (I used Memoir), or to give any other LaTeX instruction in the header, without any interference or involvement from txt2tags. Sometimes I use line 30, also optional, to execute any other post-processing commands on the LaTeX source that, for whatever reason, are not convenient to run earlier.

The two invocations of pdflatex in lines 33/34 finish the job: the first creates a draft of the book to calculate page numbers and other data, the second produces the final PDF with clickable table of contents (see below) and all the other goodies the Memoir LaTeX class can handle. The script invoked in line 37 is the one that scans all the source files again to produce the HTML list of references.
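As a side note, OOPENSQUARE and CLOOSESQUARE are just temporary placeholders emitted by replace_urls_with_refs.pl, presumably so that the literal square brackets of the reference numbers survive the txt2tags conversion untouched; lines 22/23 of the script turn them back into real brackets. A tiny stand-alone demonstration of that step (the file name and reference number are made up):

  echo 'declared Vista a landfill nightmare OOPENSQUARE19 - 2CLOOSESQUARE' > demo.tex
  perl -pi.bak -e 's/OOPENSQUARE/[/g'   demo.tex
  perl -pi.bak -e 's/CLOOSESQUARE/]/g'  demo.tex
  cat demo.tex    # prints: declared Vista a landfill nightmare [19 - 2]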

Summing all up

I haven’t described all the gory details and auxiliary scripts at length because my main goal with this article is to introduce a txt2tags-based way of working. As I already said, this general method is quite simple but, in spite of this or maybe just for this reason, I find it very, very flexible and powerful. Two great Free Software applications, txt2tags and pdflatex, plus about one hundred lines of code in three separate scripts, can produce print-ready digital books and/or all the HTML code you need to make an online version or simply an associated website. Besides, you can easily add to the mix programs like curl or ftp to upload everything to a server. Personally, my next step will be to extend make_book.sh to generate OpenDocument files thanks to OpenDocument scripting.
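For instance, a single extra line at the end of make_book.sh could push the finished PDF to a hosting server; this is only a hypothetical sketch, with a made-up server name and credentials:

  # upload the final PDF via FTP (server, path and account are placeholders)
  curl -T complete_source.pdf --user myname:mypassword ftp://ftp.example.com/books/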

How to transform (almost) plain ASCII text to Lulu-ready PDF files, part 2

This page gives a general overview of a flow for transforming ASCII files into print-ready PDF books. The reasons for setting up such a flow in this way are explained in the first part of this tutorial.

Basic workflow

The basic usage of txt2tags is really simple. Once you’ve written something that you need to convert to PDF, text or HTML you can launch the graphical interface with the --gui option or run a command like this at the prompt:

  txt2tags -t xhtml -i mypost.txt -o mypost.html

This will tell txt2tags to save in the mypost.html file an HTML version of the content of mypost.txt. What tells the script the desired output format is the -t (target) option. In this case it is xhtml. Had it been txt or tex, it would have produced a plain text or LaTeX file.
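For instance, the same source converted to plain text would simply be (the output file name is arbitrary):

  txt2tags -t txt -i mypost.txt -o mypost_plain.txt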

This figure shows the txt2tags source of this article alongside its plain text, HTML and PDF versions.

As you can see, syntax coloring for txt2tags is already available for Kate (the editor shown above) as well as emacs, Vim and other popular text editors. In order to obtain PDF files, you need to run pdflatex or similar tools on the .tex file created by txt2tags:

  txt2tags -t tex -i mypost.txt -o mypost.tex
  pdflatex mypost.tex

From single files to books

The real power of txt2tags, at least for me, is the fact that it makes it easy to work on multiple, completely independent source files as if they were one (more on this later). This makes it a breeze to create whole books and sets of HTML pages or other content related to the books, always keeping everything in sync and interlinked with the content of the other versions. Here is a real-world case: how I created the PDF source and its HTML counterparts for the Guide.

I had some specific requirements, which by the way are common to many other projects of mine. First, I wanted each chapter of the book to be in a separate file. This is both to make incremental backups easier and to collate files from different previous projects without duplicating them. Then I wanted to create an independent, online HTML list of all the web pages mentioned in the book, with the same reference numbers used in the printed copy. I also wanted each of those links in the HTML list to have a descriptive caption that I had written by hand. This is, by the way, the reason why I worked out a custom, but relatively simple, cross-referencing system instead of doing everything in LaTeX. For example, at a certain point in the book I wrote that Windows Vista had been called a “landfill nightmare”. This is the corresponding sentence in the source file, complete with txt2tags markup, which includes the reference URL:

  the UK Green Party officially declared Vista... a ["landfill nightmare" http://www.greenparty.org.uk/news/2851]

I wanted the PDF version of the chapter to include a reference number like [19 - 2], meaning the second cross-reference of chapter 19. I also wanted the HTML list to associate to that link the same number and the caption ‘Windows Vista? A “landfill nightmare”’. Using the scripts explained below produced the HTML source for the online version of the chapter, the PDF shown in the previous figure and the HTML list with the same reference numbers that you can see online.

The main script I used to create the PDF version ready for upload shown above is published and explained in the last part of this tutorial. The PDF resulting after processing the cross-references is shown here.

How to transform (almost) plain ASCII text to Lulu-ready PDF files, part 1

Many people write far more now that they are constantly online than in the pre-Internet age. Most of this activity is limited to Web or office-style publishing. People either write something that will only appear inside some Web browser or a traditional “document”, that is a single file, more or less nicely formatted for printing. Very often, however, they don’t do it in the most efficient way.

The most common solution for the first scenario still is to write HTML or Wiki-formatted content in a text editor or, through a browser, directly in the authoring interface of CMS systems like Drupal or WordPress. The other approach is even closer to the typewriting era, since it’s limited to using a word processor like OpenOffice. Both methods involve too much manual work for my taste, especially if you often want to reuse or move content from one format to the other.

Since I write a lot for both of the scenarios above, and some more, some time ago I realized that I needed a more efficient and flexible workflow: something that was as close as possible to “write ONCE, publish anywhere, re-mixing and processing already written stuff in any possible way without getting mad along the way”. I wanted to write QUICKLY, without thinking at all about where or in which format the text would end up, while being prepared for all cases, from blog to book. I also wanted to use only Free Software that would run quickly even on old computers, with little or no configuration, if necessary on any operating system. Finally, I wanted to be able to manage, search and process all my writings automatically, with command line utilities or shell scripts.

While I must admit I’m not there yet (especially when working on commission with very particular requirements) I already am pretty close to it in most cases. The rest of this article explains which software I chose and some scripts I wrote to work in this way, that is to write stuff only once and then convert it to a publishing-quality PDF or to HTML with just a few commands.

The first (easy) choice I had to make was “which file format should I use?”. I am a huge fan of the OpenDocument format (ODF), also because ODF is very easy to hack. However, the requirements above immediately exclude it as a source format for most of the stuff I write. The natural Free as in Freedom format for producing good PDF is still TeX or LaTeX, but I wanted HTML and OpenDocument as final options, which aren’t easy to obtain starting from TeX. Besides, I wanted to quickly write text that would already be highly readable in its native format, without too much markup in the way. The obvious conclusion was that I should write plain text marked up with a simple, wiki-like syntax such as ReST, Markdown or txt2tags.

I chose the last one for two reasons. First, it has very good export to all the formats I need (LaTeX, plain text, MediaWiki and HTML) with the exception of ODF which is, however, relatively easy to add, at least conceptually. Above all, however, txt2tags is simple. Its markup is very readable and easy to learn, but that’s not its biggest quality. What I like is that the actual software consists of one small Python script that runs in a graphical interface (shown below) or at the prompt with a few options, without depending on any third party library or additional module. Unless your operating system doesn’t support Python, you only need that script and any text editor to work.

Sure, you need other software to generate PDF files from LaTeX, and auxiliary shell scripts for pre- and post-processing like those I describe below, but (unlike what I found in other markup systems) that’s all Free Software that’s guaranteed to be already packaged in almost all GNU/Linux distributions (including server-oriented ones, for automatic remote processing!) and also available for Windows. Besides, being a command line tool that can accept text from STDIN or send it to STDOUT, txt2tags integrates perfectly with any other script-based text processing procedure one may need.
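As an example of what that STDIN/STDOUT friendliness buys you, here is a hypothetical one-liner (the file names are placeholders) that extracts all the chapter and section titles from a set of sources and converts them into a plain text outline in one pass:

  # txt2tags headers start with '=', so grab them and convert them from STDIN
  grep -h '^=' ch*.t2t | txt2tags -t txt --no-headers -i - -o outline.txt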

Ultra-quick intro to txt2tags syntax pros and cons

The markup syntax of Txt2tags (see its online demo) leaves the source text very readable. Headers have one or more equal signs at the beginning and end of the line. Non-numbered and numbered list items start with a dash or a plus character, respectively. Hyperlinks are included in square brackets, asterisks delimit bold text and slashes delimit italics. To build tables you must enclose the content of each cell in pipe signs (|). Comments start with a percent sign and preprocessing directives with a negated comment (%!). The only two things I care about that txt2tags doesn’t support natively are footnotes and cross-references to tables and figures. For footnotes there’s one workaround in this tutorial and one in the txt2tags configuration file of S. D'Archino. Cross-references are (relatively speaking) much more complicated to add, but are still possible by generalizing the approach described in the final part of this article, if you really need them.
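Here is a short, made-up fragment that shows most of that markup at a glance (the URL and the included file name are placeholders):

  = Chapter title =
  == Section title ==

  A paragraph with **bold**, //italic// and a [link http://www.example.com].

  - a plain bullet item
  + a numbered item

  | first cell | second cell |

  % this line is a comment, ignored by txt2tags
  %!Include: another_chapter.t2t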

Click here to read How to transform (almost) plain ASCII text to Lulu-ready PDF files, part 2.