From: Mohammad Akhlaghi
Subject: [gnuastro-commits] master 91f2d3e 068/113: Imported recent work in master, conflicts fixed
Date: Fri, 16 Apr 2021 10:33:48 -0400 (EDT)

branch: master
commit 91f2d3e7cb6fa2bd6d6c66c2e8a71e5107c25834
Merge: 486f915 5cb927d
Author: Mohammad Akhlaghi <mohammad@akhlaghi.org>
Commit: Mohammad Akhlaghi <mohammad@akhlaghi.org>

    Imported recent work in master, conflicts fixed
    
    Some conflicts came up and were corrected, mainly regarding the movement
    of the kernel headers into the separate NoiseChisel and Segment
    directories. Some conflicts in Segment's new clump evaluation were also
    fixed.
---
 NEWS                                               |   26 +-
 THANKS                                             |    1 +
 bin/crop/onecrop.c                                 |   21 +-
 bin/noisechisel/Makefile.am                        |    2 +-
 .../noisechisel}/kernel-2d.h                       |    2 +-
 .../noisechisel}/kernel-3d.h                       |    0
 bin/noisechisel/ui.c                               |    8 +-
 bin/segment/clumps.c                               |   76 +-
 {lib/gnuastro-internal => bin/segment}/kernel-2d.h |   28 +-
 {lib/gnuastro-internal => bin/segment}/kernel-3d.h |    0
 bin/segment/ui.c                                   |   10 +-
 doc/announce-acknowledge.txt                       |    1 +
 doc/gnuastro.texi                                  | 1730 ++++++++++++++------
 lib/Makefile.am                                    |    1 -
 lib/statistics.c                                   |   13 +-
 15 files changed, 1367 insertions(+), 552 deletions(-)

diff --git a/NEWS b/NEWS
index a870f12..6d0eb6e 100644
--- a/NEWS
+++ b/NEWS
@@ -54,6 +54,7 @@ GNU Astronomy Utilities NEWS                          -*- outline -*-
     --upperlimitskew: (mean-median)/sigma or skewness of random distribution.
 
   NoiseChisel:
+    - New tutorial on detecting large and extended targets.
     --rawoutput: only output the detection labels and Sky and its STD.
     --ignoreblankinsky: don't set the pixels that are blank in the input to
       blank in the Sky and Sky standard deviation outputs (when
@@ -131,6 +132,15 @@ GNU Astronomy Utilities NEWS                          -*- outline -*-
     --history: can be called/written multiple times in one run.
     --comment: can be called/written multiple times in one run.
 
+  MakeCatalog:
+    - The `WCLUMPS' keyword in the objects labeled image is no longer used
+         to see if a clumps catalog should also be made. To build a clumps
+         catalog, you can now use the `--clumpscat' option.
+    - Estimation of noise-level is now done per-pixel over the whole
+         label. Until now the average noise level was used.
+    --objectsfile has been removed. The main input argument is now assumed
+         to be the objects file.
+
   NoiseChisel:
     From this release, NoiseChisel is only in charge of detection and won't
     do segmentation any more. The new Segment program is in charge of
@@ -143,17 +153,13 @@ GNU Astronomy Utilities NEWS                          -*- outline -*-
       --detsnminarea ==> --snminarea
       --checkdetsn   ==> --checksn
       --detquant     ==> --snquant
-    - By default the detection map is a binary image (values of 0 or 1).
+    - By default the output detection map is a binary image (values of 0 or 1).
     - With no output name, the output has a `_detected.fits' suffix.
-
-  MakeCatalog:
-    - The `WCLUMPS' keyword in the objects labeled image is no longer used
-         to see if a clumps catalog should also be made. To build a clumps
-         catalog, you can now use the `--clumpscat' option.
-    - Estimation of noise-level is now done per-pixel over the whole
-         label. Until now the average noise level was used.
-    --objectsfile has been removed. The main input argument is now assumed
-         to be the objects file.
+    - [Now in Segment]: For finding true clumps, the difference between
+      the peak of the clump and the highest valued river pixel, divided by
+      the noise standard deviation, is used instead of the total
+      signal-to-noise ratio. In initial tests, this algorithm was much
+      more promising in detecting clumps over strong gradients.
 
   Table:
     --column: multiple columns (comma separated) can be used in one
diff --git a/THANKS b/THANKS
index 22fafa8..4c9823d 100644
--- a/THANKS
+++ b/THANKS
@@ -23,6 +23,7 @@ support in Gnuastro. The list is ordered alphabetically (by family name).
     Fernando Buitrago                    fbuitrago@oal.ul.pt
     Adrian Bunk                          bunk@debian.org
     Rosa Calvi                           rcalvi@iac.es
+    Nushkia Chamba                       chamba@iac.es
     Benjamin Clement                     benjamin.clement@univ-lyon1.fr
     Nima Dehdilani                       nimadehdilani@gmail.com
     Antonio Diaz Diaz                    antonio@gnu.org
diff --git a/bin/crop/onecrop.c b/bin/crop/onecrop.c
index 2cd6eba..a24ad29 100644
--- a/bin/crop/onecrop.c
+++ b/bin/crop/onecrop.c
@@ -701,7 +701,16 @@ onecrop_make_array(struct onecropparams *crp, long *fpixel_i,
 
 
 /* The starting and ending points are set in the onecropparams structure
-   for one crop from one image. Crop that region out of the input. */
+   for one crop from one image. Crop that region out of the input.
+
+   On `basekeyname': to be safe, GCC 8.1 (and presumably later versions)
+   assumes that we are writing the full statically allocated space into
+   `regionkey'! So it prints a warning that you may be writing outside the
+   allocated space! With these variables, we are ultimately just writing
+   the file counters, so we can never (with current technologies!) exceed
+   `FLEN_KEYWORD' (which is 75 characters). To avoid compiler warnings, we
+   just remove a few characters (`FLEN_KEYWORD-5') to leave room for the
+   suffix and remove the warnings. */
 void
 onecrop(struct onecropparams *crp)
 {
@@ -710,9 +719,9 @@ onecrop(struct onecropparams *crp)
 
   void *array;
   int status=0, anynul=0;
-  char basename[FLEN_KEYWORD];
   fitsfile *ifp=crp->infits, *ofp;
-  gal_fits_list_key_t *headers=NULL;
+  char basekeyname[FLEN_KEYWORD-5];     /* `-5': avoid gcc 8.1+ warnings! */
+  gal_fits_list_key_t *headers=NULL;    /* See above comment for more.    */
   size_t i, j, cropsize=1, ndim=img->ndim;
   char region[FLEN_VALUE], regionkey[FLEN_KEYWORD];
   long fpixel_o[MAXDIM], lpixel_o[MAXDIM], inc[MAXDIM];
@@ -798,9 +807,9 @@ onecrop(struct onecropparams *crp)
 
       /* A section has been added to the cropped image from this input
          image, so save the information of this image. */
-      sprintf(basename, "ICF%zu", crp->numimg);
-      gal_fits_key_write_filename(basename, img->name, &headers);
-      sprintf(regionkey, "%sPIX", basename);
+      sprintf(basekeyname, "ICF%zu", crp->numimg);
+      gal_fits_key_write_filename(basekeyname, img->name, &headers);
+      sprintf(regionkey, "%sPIX", basekeyname);
       gal_fits_key_list_add_end(&headers, GAL_TYPE_STRING, regionkey,
                                 0, region, 0, "Range of pixels used for "
                                 "this output.", 0, NULL);
diff --git a/bin/noisechisel/Makefile.am b/bin/noisechisel/Makefile.am
index 0fab8ea..2428806 100644
--- a/bin/noisechisel/Makefile.am
+++ b/bin/noisechisel/Makefile.am
@@ -34,7 +34,7 @@ astnoisechisel_SOURCES = main.c ui.c detection.c noisechisel.c sky.c     \
   threshold.c
 
 EXTRA_DIST = main.h authors-cite.h args.h ui.h detection.h noisechisel.h \
-  sky.h threshold.h
+  sky.h threshold.h kernel-2d.h
 
 
 
diff --git a/lib/gnuastro-internal/kernel-2d.h b/bin/noisechisel/kernel-2d.h
similarity index 98%
copy from lib/gnuastro-internal/kernel-2d.h
copy to bin/noisechisel/kernel-2d.h
index 3462467..06a640b 100644
--- a/lib/gnuastro-internal/kernel-2d.h
+++ b/bin/noisechisel/kernel-2d.h
@@ -1,5 +1,5 @@
 /*********************************************************************
-The default 2D kernel to be used in NoiseChisel and Segment.
+The default 2D kernel to be used in NoiseChisel.
 This is part of GNU Astronomy Utilities (Gnuastro) package.
 
 Original author:
diff --git a/lib/gnuastro-internal/kernel-3d.h b/bin/noisechisel/kernel-3d.h
similarity index 100%
copy from lib/gnuastro-internal/kernel-3d.h
copy to bin/noisechisel/kernel-3d.h
diff --git a/bin/noisechisel/ui.c b/bin/noisechisel/ui.c
index 70204ba..f147b2f 100644
--- a/bin/noisechisel/ui.c
+++ b/bin/noisechisel/ui.c
@@ -411,11 +411,9 @@ ui_prepare_kernel(struct noisechiselparams *p)
   float *f, *ff, *k;
   size_t ndim=p->input->ndim;
 
-/* Since the default kernel has to be identical between NoiseChisel and
-   Segment, we have defined it in a shared header file to be accessible by
-   both programs. */
-#include <gnuastro-internal/kernel-2d.h>
-#include <gnuastro-internal/kernel-3d.h>
+/* Import the default kernel. */
+#include "kernel-2d.h"
+#include "kernel-3d.h"
 
   /* If a kernel file is given, then use it. Otherwise, use the default
      kernel. */
diff --git a/bin/segment/clumps.c b/bin/segment/clumps.c
index 5055216..56d32c8 100644
--- a/bin/segment/clumps.c
+++ b/bin/segment/clumps.c
@@ -229,17 +229,17 @@ clumps_grow_prepare_final(struct clumps_thread_params *cltprm)
    below.*/
 enum infocols
   {
-    INFO_X,              /* Flux weighted X center col, 0 by C std. */
-    INFO_Y,              /* Flux weighted Y center col.             */
-    INFO_Z,              /* Flux weighted Z center col.             */
-    INFO_SFF,            /* Sum of non-negative pixels (for X,Y).   */
-    INFO_INFLUX,         /* Tatal flux within clump.                */
-    INFO_INAREA,         /* Tatal area within clump.                */
-    INFO_RIVFLUX,        /* Tatal flux within rivers around clump.  */
-    INFO_RIVAREA,        /* Tatal area within rivers around clump.  */
-    INFO_INSTD,          /* Standard deviation at clump center.     */
-
-    INFO_NCOLS,          /* Total number of columns.                */
+    INFO_X,              /* Flux weighted X center col, 0 by C std.       */
+    INFO_Y,              /* Flux weighted Y center col.                   */
+    INFO_Z,              /* Flux weighted Z center col.                   */
+    INFO_SFF,            /* Sum of non-negative pixels (for X,Y).         */
+    INFO_INSTD,          /* Standard deviation at clump center.           */
+    INFO_INAREA,         /* Total area within clump.                      */
+    INFO_RIVAREA,        /* Total area within rivers around clump.        */
+    INFO_PEAK_RIVER,     /* Peak (min or max) river value around a clump. */
+    INFO_PEAK_CENTER,    /* Peak (min or max) clump value.                */
+
+    INFO_NCOLS,          /* Total number of columns in the `info' table.  */
   };
 static void
 clumps_get_raw_info(struct clumps_thread_params *cltprm)
@@ -251,7 +251,7 @@ clumps_get_raw_info(struct clumps_thread_params *cltprm)
   double *row, *info=cltprm->info->array;
   size_t nngb=gal_dimension_num_neighbors(ndim);
   struct gal_tile_two_layer_params *tl=&p->cp.tl;
-  float *values=p->input->array, *std=p->std->array;
+  float *values=p->conv->array, *std=p->std->array;
   size_t *dinc=gal_dimension_increment(ndim, dsize);
   int32_t lab, nlab, *ngblabs, *clabel=p->clabel->array;
 
@@ -266,18 +266,27 @@ clumps_get_raw_info(struct clumps_thread_params *cltprm)
         /* This pixel belongs to a clump. */
         if( clabel[ *a ]>0 )
           {
-            lab=clabel[*a];
-            ++info[ lab * INFO_NCOLS + INFO_INAREA ];
-            info[   lab * INFO_NCOLS + INFO_INFLUX ] += values[*a];
+            /* For easy reading. */
+            row = &info [ clabel[*a] * INFO_NCOLS ];
+
+            /* Get the area and flux. */
+            ++row[ INFO_INAREA ];
             if( values[*a]>0.0f )
               {
                 gal_dimension_index_to_coord(*a, ndim, dsize, coord);
-                info[ lab * INFO_NCOLS + INFO_SFF ] += values[*a];
-                info[ lab * INFO_NCOLS + INFO_X   ] += values[*a] * coord[0];
-                info[ lab * INFO_NCOLS + INFO_Y   ] += values[*a] * coord[1];
+                row[   INFO_SFF ] += values[*a];
+                row[   INFO_X   ] += values[*a] * coord[0];
+                row[   INFO_Y   ] += values[*a] * coord[1];
                 if(ndim==3)
-                  info[ lab * INFO_NCOLS + INFO_Z ] += values[*a] * coord[2];
+                  row[ INFO_Z   ] += values[*a] * coord[2];
               }
+
+            /* In the loop `INFO_INAREA' is just the pixel counter of this
+               clump. The pixels are sorted by flux (decreasing for
+               positive clumps and increasing for negative). So the
+               extremum (peak) value is just the first pixel of the
+               clump. */
+            if( row[ INFO_INAREA ]==1.0f )
+              row[ INFO_PEAK_CENTER ] = values[*a];
           }
 
         /* This pixel belongs to a river (has a value of zero and isn't
@@ -313,8 +322,11 @@ clumps_get_raw_info(struct clumps_thread_params *cltprm)
                     if(i==ii)
                       {
                         ngblabs[ii++] = nlab;
-                        ++info[nlab * INFO_NCOLS + INFO_RIVAREA];
-                        info[  nlab * INFO_NCOLS + INFO_RIVFLUX]+=values[*a];
+                        row = &info[ nlab * INFO_NCOLS ];
+
+                        ++row[INFO_RIVAREA];
+                        if( row[INFO_RIVAREA]==1.0f )
+                          row[INFO_PEAK_RIVER] = values[*a];
                       }
                   }
               } );
@@ -386,7 +398,7 @@ clumps_make_sn_table(struct clumps_thread_params *cltprm)
 
   float *snarr;
   int32_t *indarr=NULL;
-  double I, O, Ni, var, *row;
+  double C, R, std, Ni, *row;
   int sky0_det1=cltprm->clprm->sky0_det1;
   size_t i, ind, counter=0, infodsize[2]={tablen, INFO_NCOLS};
 
@@ -438,33 +450,31 @@ clumps_make_sn_table(struct clumps_thread_params *cltprm)
     {
       /* For readability. */
       row = &( ((double *)(cltprm->info->array))[ i * INFO_NCOLS ] );
-      Ni  = row[ INFO_INAREA ];
-      I   = row[ INFO_INFLUX ]  / row[ INFO_INAREA ];
-      O   = row[ INFO_RIVFLUX ] / row[ INFO_RIVAREA ];
+      Ni  = row[ INFO_INAREA      ];
+      R   = row[ INFO_PEAK_RIVER  ];
+      C   = row[ INFO_PEAK_CENTER ];
 
 
       /* If the inner flux is smaller than the outer flux (happens only in
          noise cases) or the area is smaller than the minimum area to
          calculate signal-to-noise, then set the S/N of this segment to
          zero. */
-      if( (p->minima ? O>I : I>O) && Ni>p->snminarea )
+      if( Ni>p->snminarea )
         {
-          /* For easy reading, define `var' for variance.  */
-          var = row[INFO_INSTD] * row[INFO_INSTD];
-
           /* Calculate the Signal to noise ratio, if we are on the noise
              regions, we don't care about the IDs of the clumps anymore, so
              store the Signal to noise ratios contiguously (for easy
              sorting and etc). Note that counter will always be smaller and
              equal to i. */
+          std=row[INFO_INSTD];
           ind = sky0_det1 ? i : counter++;
           if(cltprm->snind) indarr[ind]=i;
-          snarr[ind]=( sqrt(Ni/p->cpscorr) * ( p->minima ? O-I : I-O)
-                       / sqrt( (I>0?I:-1*I) + (O>0?O:-1*O) + var ) );
+          snarr[ind] = ( p->minima ? R-C : C-R ) / std;
         }
       else
         {
-          /* Only over detections, we should put a NaN when the S/N  */
+          /* Only over detections, we should put a NaN when the S/N isn't
+             calculated.  */
           if(sky0_det1)
             {
               snarr[i]=NAN;
@@ -1021,7 +1031,7 @@ clumps_true_find_sn_thresh(struct segmentparams *p)
   p->clumpsnthresh = *((float *)(quant->array));
   if(!p->cp.quiet)
     {
-      if( asprintf(&msg, "Clump S/N: %.2f (%.3f quant of %zu).",
+      if( asprintf(&msg, "Clump peak S/N: %g (%.3f quant of %zu).",
                    p->clumpsnthresh, p->snquant, sn->size)<0 )
         error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
       gal_timing_report(&t1, msg, 2);
diff --git a/lib/gnuastro-internal/kernel-2d.h b/bin/segment/kernel-2d.h
similarity index 70%
rename from lib/gnuastro-internal/kernel-2d.h
rename to bin/segment/kernel-2d.h
index 3462467..2db052f 100644
--- a/lib/gnuastro-internal/kernel-2d.h
+++ b/bin/segment/kernel-2d.h
@@ -1,5 +1,5 @@
 /*********************************************************************
-The default 2D kernel to be used in NoiseChisel and Segment.
+The default 2D kernel to be used in Segment.
 This is part of GNU Astronomy Utilities (Gnuastro) package.
 
 Original author:
@@ -43,7 +43,7 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
 
      export GSL_RNG_SEED=1
      export GSL_RNG_TYPE=ranlxs2
-     astmkprof --kernel=gaussian,2,5 --oversample=1 --envseed --numrandom=100000
+     astmkprof --kernel=gaussian,1.5,5 --oversample=1 --envseed --numrandom=100000
 
    Convert it to C code
    --------------------
@@ -102,28 +102,20 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
      $ astbuildprog -q kernel.c > kernel-2d.h
  */
 
-size_t kernel_2d_dsize[2]={11, 11};
-float kernel_2d[121]={0, 0, 0, 0, 0, 2.599797e-08, 0, 0, 0, 0, 0,
+size_t kernel_2d_dsize[2]={7, 7};
+float kernel_2d[49]={0, 3.992438e-07, 8.88367e-06, 2.470061e-05, 8.96143e-06, 3.961747e-07, 0,
 
-0, 0, 3.008479e-08, 6.938075e-07, 4.493532e-06, 8.276223e-06, 4.515019e-06, 6.947793e-07, 3.04628e-08, 0, 0,
+3.961645e-07, 8.509836e-05, 0.001905851, 0.005246491, 0.001900595, 8.399635e-05, 3.977891e-07,
 
-0, 3.009687e-08, 2.556034e-06, 5.936867e-05, 0.0003808578, 0.0007126221, 0.0003827095, 5.902729e-05, 2.553342e-06, 2.978137e-08, 0,
+8.959198e-06, 0.00190299, 0.04301567, 0.1174493, 0.0428412, 0.001911332, 8.923742e-06,
 
-0, 7.021852e-07, 5.912285e-05, 0.00137637, 0.008863639, 0.01648383, 0.008855942, 0.001365171, 5.925718e-05, 7.021184e-07, 0,
+2.455387e-05, 0.005209642, 0.1172349, 0.3221542, 0.1174603, 0.005248448, 2.447141e-05,
 
-0, 4.490787e-06, 0.0003826718, 0.008857355, 0.05742518, 0.1062628, 0.05727194, 0.008880079, 0.0003826067, 4.478989e-06, 0,
+9.018465e-06, 0.001908686, 0.04294781, 0.1173853, 0.04282322, 0.001887719, 8.985901e-06,
 
-2.595735e-08, 8.31301e-06, 0.0007113572, 0.01640853, 0.1061298, 0.1971036, 0.1062611, 0.01647962, 0.000708363, 8.379878e-06, 2.593496e-08,
+3.969509e-07, 8.505241e-05, 0.001909065, 0.005238522, 0.001906396, 8.491996e-05, 3.998521e-07,
 
-0, 4.516684e-06, 0.0003846966, 0.008860709, 0.05739478, 0.1062216, 0.05725683, 0.00881713, 0.000383981, 4.473017e-06, 0,
-
-0, 6.950547e-07, 5.920586e-05, 0.00137483, 0.00887785, 0.0164709, 0.008855232, 0.001372743, 5.939038e-05, 7.016624e-07, 0,
-
-0, 3.006322e-08, 2.587011e-06, 5.92911e-05, 0.0003843824, 0.0007118155, 0.000386519, 5.974654e-05, 2.585581e-06, 3.048036e-08, 0,
-
-0, 0, 3.041056e-08, 7.05225e-07, 4.497418e-06, 8.388542e-06, 4.478833e-06, 7.018358e-07, 2.995504e-08, 0, 0,
-
-0, 0, 0, 0, 0, 2.567377e-08, 0, 0, 0, 0, 0};
+0, 3.998288e-07, 9.012383e-06, 2.466673e-05, 9.072039e-06, 4.024199e-07, 0};
 
 
 #endif
diff --git a/lib/gnuastro-internal/kernel-3d.h b/bin/segment/kernel-3d.h
similarity index 100%
rename from lib/gnuastro-internal/kernel-3d.h
rename to bin/segment/kernel-3d.h
diff --git a/bin/segment/ui.c b/bin/segment/ui.c
index 228190b..246d336 100644
--- a/bin/segment/ui.c
+++ b/bin/segment/ui.c
@@ -501,11 +501,9 @@ ui_prepare_kernel(struct segmentparams *p)
   float *f, *ff, *k;
   size_t ndim=p->input->ndim;
 
-/* Since the default kernel has to be identical between NoiseChisel and
-   Segment, we have defined it in a shared header file to be accessible by
-   both programs. */
-#include <gnuastro-internal/kernel-2d.h>
-#include <gnuastro-internal/kernel-3d.h>
+/* Import the default kernel. */
+#include "kernel-2d.h"
+#include "kernel-3d.h"
 
   /* If a kernel file is given, then use it. Otherwise, use the default
      kernel. */
@@ -946,7 +944,7 @@ ui_read_check_inputs_setup(int argc, char *argv[], struct segmentparams *p)
                 printf("  - No convolution requested.\n");
             }
           else
-            printf("  - Kernel: FWHM=2 pixel Gaussian.\n");
+            printf("  - Kernel: FWHM=1.5 pixel Gaussian.\n");
         }
       printf("  - Detection: %s (hdu: %s)\n", p->useddetectionname, p->dhdu);
     }
diff --git a/doc/announce-acknowledge.txt b/doc/announce-acknowledge.txt
index c74b8fa..1fddafe 100644
--- a/doc/announce-acknowledge.txt
+++ b/doc/announce-acknowledge.txt
@@ -1,6 +1,7 @@
 People who's help must be acknowledged in the next release.
 
 Leindert Boogaard
+Nushkia Chamba
 Nima Dehdilani
 Antonio Diaz Diaz
 Lee Kelvin
diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index ae3b209..39e4e9b 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -237,9 +237,10 @@ New to GNU/Linux?
 
 Tutorials
 
-* Hubble visually checks and classifies his catalog::  Check a catalog.
 * Sufi simulates a detection::  Simulating a detection.
 * General program usage tutorial::  Usage of all programs in a good way.
+* Detecting large extended targets::  Using NoiseChisel for huge extended targets.
+* Hubble visually checks and classifies his catalog::  Visual checks on a catalog.
 
 Installation
 
@@ -447,7 +448,7 @@ Sky value
 
 NoiseChisel
 
-* NoiseChisel changes after publication::  Changes to the software after publication.
+* NoiseChisel changes after publication::  NoiseChisel updates after paper's publication.
 * Invoking astnoisechisel::     Options and arguments for NoiseChisel.
 
 Invoking NoiseChisel
@@ -458,6 +459,7 @@ Invoking NoiseChisel
 
 Segment
 
+* Segment changes after publication::  Segment updates after paper's publication.
 * Invoking astsegment::         Inputs, outputs and options to Segment
 
 Invoking Segment
@@ -709,10 +711,9 @@ environment. Gnuastro also comes with a large set of libraries, so you can
 write your own programs using Gnuastro's building blocks, see @ref{Review
 of library fundamentals} for an introduction.
 
-Finally it must be mentioned that in Gnuastro, no change to any
-program will be released before it has been fully documented in this
-here first. As discussed in @ref{Science and its tools} this is the
-founding basis of the Gnuastro.
+In Gnuastro, no change to any program or library will be committed to its
+history before it has been fully documented here first. As discussed in
+@ref{Science and its tools}, this is a founding principle of Gnuastro.
 
 @menu
 * Quick start::                 A quick start to installation.
@@ -879,7 +880,7 @@ software/statistical-method really does (especially as it gets more
 complicated), and thus the scientific interpretation of the result. This
 attitude is further encouraged through non-free
 software@footnote{@url{https://www.gnu.org/philosophy/free-sw.html}},
-poorly written (or non-existant) scientific software manuals, and
+poorly written (or non-existent) scientific software manuals, and
 non-reproducible papers@footnote{Where the authors omit many of the
 analysis/processing ``details'' from the paper by arguing that they would
 make the paper too long/unreadable. However, software methods do allows us
@@ -913,38 +914,42 @@ engineering courses and thus used in most software. The GNU Astronomy
 Utilities are an effort to tackle this issue.
 
 Gnuastro is not just a software, this book is as important to the idea
-behind Gnuastro as the source code (software). This book has tried to
-learn from the success of the ``Numerical Recipes'' book in educating
-those who are not software engineers and computer scientists but still
-heavy users of computational algorithms, like astronomers. There are
-two major differences: the code and the explanations are segregated:
-the code is moved within the actual Gnuastro software source code and
-the underlying explanations are given here. In the source code every
-non-trivial step is heavily commented and correlated with this book,
-it follows the same logic of this book, and all the programs follow a
-similar internal data, function and file structure, see @ref{Program
-source}. Complementing the code, this book focuses on thoroughly
-explaining the concepts behind those codes (history, mathematics,
-science, software and usage advise when necessary) along with detailed
-instructions on how to run the programs. At the expense of frustrating
-``professionals'' or ``experts'', this book and the comments in the
-code also intentionally avoid jargon and abbreviations. The source
-code and this book are thus intimately linked, and when considered as
-a single entity can be thought of as a real (an actual software
-accompanying the algorithms) ``Numerical Recipes'' for astronomy.
-
-The other major and arguably more important difference is that ``Numerical
-Recipes'' does not allow you to distribute any code that you have learned
-from it. So while it empowers the privileged individual who has access to
-it, it exacerbates social ignorance. For example it does not allow you to
-release your software's source code if you have used their codes, you can
-only publicly release binaries (a black box) to the community. Exactly at
-the opposite end of the spectrum, Gnuastro's source code is released under
-the GNU general public license (GPL) and this book is released under the
-GNU free documentation license. You are therefore free to distribute any
-software you create using parts of Gnuastro's source code or text, or
-figures from this book, see @ref{Your rights}. While developing the source
-code and this book together, the developers of Gnuastro aim to impose the
+behind Gnuastro as the source code (software). This book has tried to learn
+from the success of the ``Numerical Recipes'' book in educating those who
+are not software engineers and computer scientists but still heavy users of
+computational algorithms, like astronomers. There are two major
+differences.
+
+The first difference is that Gnuastro's code and the background information
+are segregated: the code is moved within the actual Gnuastro software
+source code and the underlying explanations are given here in this book. In
+the source code, every non-trivial step is heavily commented and correlated
+with this book, it follows the same logic of this book, and all the
+programs follow a similar internal data, function and file structure, see
+@ref{Program source}. Complementing the code, this book focuses on
+thoroughly explaining the concepts behind those codes (history,
+mathematics, science, software and usage advice when necessary) along with
+detailed instructions on how to run the programs. At the expense of
+frustrating ``professionals'' or ``experts'', this book and the comments in
+the code also intentionally avoid jargon and abbreviations. The source code
+and this book are thus intimately linked, and when considered as a single
+entity can be thought of as a real (an actual software accompanying the
+algorithms) ``Numerical Recipes'' for astronomy.
+
+The second major, and arguably more important, difference is that
+``Numerical Recipes'' does not allow you to distribute any code that you
+have learned from it. In other words, it does not allow you to release your
+software's source code if you have used their codes; you can only publicly
+release binaries (a black box) to the community. Therefore, while it
+empowers the privileged individual who has access to it, it exacerbates
+social ignorance. Exactly at the opposite end of the spectrum, Gnuastro's
+source code is released under the GNU general public license (GPL) and this
+book is released under the GNU free documentation license. You are
+therefore free to distribute any software you create using parts of
+Gnuastro's source code or text, or figures from this book, see @ref{Your
+rights}.
+
+With these principles in mind, Gnuastro's developers aim to impose the
 minimum requirements on you (in computer science, engineering and even the
 mathematics behind the tools) to understand and modify any step of Gnuastro
 if you feel the need to do so, see @ref{Why C} and @ref{Program design
@@ -953,15 +958,21 @@ philosophy}.
 @cindex Galileo, G.
 Imagine if Galileo did not have the technical knowledge to build a
 telescope. Astronomical objects could not be seen with the Dutch military
-design of the telescope. In the beginning of his ``The Sidereal Messenger''
-(1610) he cautions the readers on this issue and instructs them on how to
-build a suitable instrument: without a detailed description of @emph{how}
-he made his observations, no one would believe him. The same is true today,
-science cannot progress with a black box. Before he actually saw the moons
-of Jupiter, the mountains on the Moon or the crescent of Venus, he was
-“evasive” to Kepler@footnote{Galileo G. (Translated by Maurice
-A. Finocchiaro). @emph{The essential Galileo}. Hackett publishing company,
-first edition, 2008.}. Science is not independent of its tools.
+design of the telescope. In other words, he couldn't have asked a lens
+maker to build a modified version. It is hard to imagine having the idea of
+modifying the optics for astronomy if he wasn't already familiar and
+experienced with optics, @emph{guided} by his astronomical curiosity. In the
+beginning of his ``The Sidereal Messenger'' (published in 1610) he cautions
+the readers on this issue and instructs them on how to build a suitable
+instrument: without a detailed description of @emph{how} he made his
+observations, no one would believe him. The same is true today: science
+cannot progress with a black box, and technical knowledge (to experiment on
+its tools, software in this context) is critical to scientific
+vitality. Before he actually saw the moons of Jupiter, the mountains on the
+Moon or the crescent of Venus, he was “evasive” to Kepler@footnote{Galileo
+G. (Translated by Maurice A. Finocchiaro). @emph{The essential
+Galileo}. Hackett publishing company, first edition, 2008.}. Science is not
+independent of its tools.
 
 @cindex Ken Thomson
 @cindex Stroustrup, Bjarne
@@ -971,13 +982,14 @@ Thomson (the designer or the Unix operating system) says ``@emph{I abhor a
 system designed for the `user' if that word is a coded pejorative meaning
 `stupid and unsophisticated'}.'' Certainly no scientist (user of a
 scientific software) would want to be considered a believer in magic, or
-stupid and unsophisticated. However, this often happen when scientists get
-too distant from the raw data and methods and are mainly indulging
-themselves in their own high-level (abstract) models (creations).
+stupid and unsophisticated.
 
-Roughly five years before special relativity and about two decades before
-quantum mechanics fundamentally changed Physics, Kelvin is quoted as
-saying:
+However, this can happen when scientists get too distant from the raw data
+and methods and are mainly indulging themselves in their own high-level
+(abstract) models (creations) and are happy with the data acquisition and
+analysis methods in vogue. For example, roughly five years before special
+relativity and about two decades before quantum mechanics fundamentally
+changed Physics, Kelvin is quoted as saying:
 
 @quotation
 @cindex Lord Kelvin
@@ -988,7 +1000,7 @@ is more and more precise measurement.
 @end quotation
 
 @noindent
-A few years earlier, in a speech Albert. A. Michelson said:
+A few years earlier, Albert A. Michelson made the following statement:
 
 @quotation
 @cindex Albert. A. Michelson
@@ -1587,7 +1599,7 @@ updated/new features, or dependencies (see @ref{Dependencies}).
 To subscribe to this list, please visit
 @url{https://lists.gnu.org/mailman/listinfo/info-gnuastro}. Traffic (number
 of mails per unit time) in this list is designed to be very low: only a
-handful of mails per year. Previous annoucements are available on
+handful of mails per year. Previous announcements are available on
 @url{http://lists.gnu.org/archive/html/info-gnuastro/, its archive}.
 
 
@@ -1697,275 +1709,94 @@ Centre de Recherche Astrophysique de Lyon, University 
of Lyon 1, France.@*
 
 @cindex Tutorial
 @cindex Cookbook
-In this chapter we give several tutorials or cookbooks on how to use
-the various tools in Gnuastro for your scientific purposes. In these
-tutorials, we have intentionally avoided too many cross references to
-make it more easily readable. To get more information about a
-particular program, you can visit the section with the same name as
-the program in this book. Each program section starts by explaining
-the general concepts behind what it does. If you only want to see an
-explanation of the options and arguments of any program, see the
-subsection titled `Invoking ProgramName'. See @ref{Conventions}, for
-an explanation of the conventions we use in the example codes through
-the book.
-
-The tutorials in this section use a fictional setting of some
-historical figures in the history of astronomy. We have tried to show
-how Gnuastro would have been helpful for them in making their
-discoveries if there were GNU/Linux computers in their times! Please
-excuse us for any historical inaccuracy, this is not intended to be a
-historical reference. This form of presentation can make the tutorials
-more pleasant and entertaining to read while also being more practical
-(explaining from a user's point of view)@footnote{This form of
-presenting a tutorial was influenced by the PGF/TikZ and Beamer
-manuals. The first provides graphic capabilities, while with the
-second you can make presentation slides in @TeX{} and @LaTeX{}. In
-these manuals, Till Tantau (author of the manual) uses Euclid as the
-protagonist. There are also some nice words of wisdom for Unix-like
-systems called ``Rootless Root'':
-@url{http://catb.org/esr/writings/unix-koans/}. These also have a
-similar style but they use a mythical figure named Master Foo. If you
-already have some experience in Unix-like systems, you will definitely
-find these ``Unix Koans'' very entertaining.}. The main reference for
-the historical facts mentioned in these fictional settings was
-Wikipedia.
+To help new users get started smoothly with Gnuastro, this chapter
+provides several thoroughly elaborated tutorials or cookbooks to
+demonstrate the capabilities of Gnuastro's programs and the best practices
+of using them.
+
+We strongly recommend going through these tutorials to get a good feeling
+of how the programs are related (built in a modular design to be used
+together in a pipeline) and of the Unix-based thought-process that went
+into creating them. Therefore these tutorials will greatly help in using
+Gnuastro's programs (and generally the Unix-like command-line environment)
+effectively.
+
+In @ref{Sufi simulates a detection}, we'll start with a
+fictional@footnote{The two historically motivated tutorials (@ref{Sufi
+simulates a detection} and @ref{Hubble visually checks and classifies his
+catalog}) are not intended to be a historical reference (the historical
+facts of this fictional tutorial used Wikipedia as a reference). This form
+of presenting a tutorial was influenced by the PGF/TikZ and Beamer
+manuals. The first provides graphic capabilities in @TeX{} and @LaTeX{},
+while with the second you can make presentation slides. In these manuals,
+Till Tantau (author of the manual) uses Euclid as the protagonist. On a
+similar topic, there are also some nice words of wisdom for Unix-like
+systems called @url{http://catb.org/esr/writings/unix-koans, Rootless
+Root}. These also have a similar style but they use a mythical figure named
+Master Foo. If you already have some experience in Unix-like systems, you
+will definitely find these Unix Koans very entertaining/educative.}
+tutorial explaining how Abd al-rahman Sufi (903 -- 986 A.D., the first
+recorded description of ``nebulous'' objects in the heavens is attributed
+to him) could have used some of Gnuastro's programs for a realistic
+simulation of his observations and see if his detection of nebulous objects
+was trustworthy. Because all conditions are under control in a simulated
+environment/dataset, they can be a very valuable tool to inspect the
+limitations of your data analysis and processing. But they need to be as
+realistic as possible, so the first tutorial is dedicated to this important
+step of an analysis.
+
+The next two tutorials (@ref{General program usage tutorial} and
+@ref{Detecting large extended targets}) use real input datasets from some
+of the deep Hubble Space Telescope (HST) images and the Sloan Digital Sky
+Survey (SDSS) respectively. Their aim is to demonstrate some real-world
+problems that can be solved with Gnuastro's programs.
+
+The ultimate aim of @ref{General program usage tutorial} is to detect
+galaxies in a deep HST image, measure their positions and brightness and
+select those with the strongest colors. In the process it takes many
+detours to introduce you to the useful capabilities of many of the
+programs. If you don't have much time and can only try one of the
+tutorials, we recommend this one.
+
+@cindex PSF
+@cindex Point spread function
+@ref{Detecting large extended targets} deals with a major problem in
+astronomy: effectively detecting the faint outer wings of bright (and
+large) nearby galaxies to extremely low surface brightness levels (roughly
+1/20th of the local noise level in the example discussed). This is an
+important issue, especially in wide surveys, because bright/large galaxies
+and stars@footnote{Stars also have similarly large and extended wings due
+to the point spread function, see @ref{PSF}.} cover a significant fraction
+of the survey area. Besides the interesting scientific questions in these
+low-surface brightness features, failure to properly detect them will bias
+the measurements of the background objects and the survey's noise
+estimates.
+
+Finally, in @ref{Hubble visually checks and classifies his catalog}, we go
+into the historical/fictional world again to see how Hubble could use
+Gnuastro's programs to visually check and classify his sample of galaxies
+which ultimately led him to the ``Hubble fork'' classification of galaxy
+morphologies.
+
+In these tutorials, we have intentionally avoided too many cross references
+to make them more easily readable. For more information about a particular
+program, you can visit the section with the same name as the program in
+this book. Each program section in the subsequent chapters starts by
+explaining the general concepts behind what it does. If you only want to
+see an explanation of the options and arguments of any program, see the
+subsection titled ``Invoking ProgramName'', for example @ref{Invoking
+astnoisechisel}. See @ref{Conventions}, for an explanation of the
+conventions we use in the example codes throughout the book.
 
 @menu
-* Hubble visually checks and classifies his catalog::  Check a catalog.
 * Sufi simulates a detection::  Simulating a detection.
 * General program usage tutorial::  Usage of all programs in a good way.
+* Detecting large extended targets::  Using NoiseChisel for huge extended targets.
+* Hubble visually checks and classifies his catalog::  Visual checks on a catalog.
 @end menu
 
-@node Hubble visually checks and classifies his catalog, Sufi simulates a detection, Tutorials, Tutorials
-@section Hubble visually checks and classifies his catalog
-
-@cindex Edwin Hubble
-In 1924 Hubble@footnote{Edwin Powell Hubble (1889 -- 1953 A.D.) was an
-American astronomer who can be considered as the father of
-extra-galactic astronomy, by proving that some nebulae are too distant
-to be within the Galaxy. He then went on to show that the universe
-appears to expand and also done a visual classification of the
-galaxies that is known as the Hubble fork.} announced his discovery
-that some of the known nebulous objects are too distant to be within
-the the Milky Way (or Galaxy) and that they were probably distant
-Galaxies@footnote{Note that at that time, ``Galaxy'' was a proper noun
-used to refer to the Milky way. The concept of a galaxy as we define
-it today had not yet become common. Hubble played a major role in
-creating today's concept of a galaxy.} in their own right. He had also
-used them to show that the redshift of the nebulae increases with
-their distance. So now he wants to study them more accurately to see
-what they actually are. Since they are nebulous or amorphous, they
-can't be modeled (like stars that are always a point) easily. So there
-is no better way to distinguish them than to visually inspect them and
-see if it is possible to classify these nebulae or not.
-
-Hubble has stored all the FITS images of the objects he wants to visually
-inspect in his @file{/mnt/data/images} directory. He has also stored his
-catalog of extra-galactic nebulae in
-@file{/mnt/data/catalogs/extragalactic.txt}. Any normal user on his
-GNU/Linux system (including himself) only has read access to the contents
-of the @file{/mnt/data} directory. He has done this by running this command
-as root:
-
-@example
-# chmod -R 755 /mnt/data
-@end example
-
-@noindent
-Hubble has done this intentionally to avoid mistakenly deleting or
-modifying the valuable images he has taken at Mount Wilson while he is
-working as an ordinary user. Retaking all those images and data is
-simply not an option. In fact they are also in another hard disk
-(@file{/dev/sdb1}). So if the hard disk which stores his GNU/Linux
-distribution suddenly malfunctions due to work load, his data is not
-in harms way. That hard disk is only mounted to this directory when he
-wants to use it with the command:
-
-@example
-# mount /dev/sdb1 /mnt/data
-@end example
-
-@noindent
-In short, Hubble wants to keep his data safe and fortunately by
-default Gnuastro allows for this.  Hubble creates a temporary
-@file{visualcheck} directory in his home directory for this check. He
-runs the following commands to make the directory and change to
-it@footnote{The @code{pwd} command is short for ``Print Working
-Directory'' and @code{ls} is short for ``list'' which shows the
-contents of a directory.}:
-
-@example
-$ mkdir ~/visualcheck
-$ cd ~/visualcheck
-$ pwd
-/home/edwin/visualcheck
-$ ls
-@end example
-
-Hubble has multiple images in @file{/mnt/data/images}, some of his targets
-might be on the edges of an image and so several images need to be stitched
-to give a good view of them. Also his extra-galactic targets belong to
-various pointings in the sky, so they are not in one large
-image. Gnuastro's Crop is just the program he wants. The catalog in
-@file{extragalactic.txt} is a plain text file which stores the basic
-information of all his known 200 extra-galactic nebulae. In its second
-column it has each object's Right Ascension (the first column is a label he
-has given to each object) and in the third the object's declination.
-
-@example
-$ astcrop --coordcol=2 --coordcol=3 /mnt/data/images/*.fits     \
-          --mode=wcs /mnt/data/catalogs/extragalactic.txt
-Crop started on Tue Jun  14 10:18:11 1932
-  ---- ./4_crop.fits                  1 1
-  ---- ./2_crop.fits                  1 1
-  ---- ./1_crop.fits                  1 1
-[[[ Truncated middle of list ]]]
-  ---- ./198_crop.fits                1 1
-  ---- ./195_crop.fits                1 1
-  - 200 images created.
-  - 200 were filled in the center.
-  - 0 used more than one input.
-Crop finished in:  2.429401 (seconds)
-@end example
-
-
-@cindex Asynchronous thread allocation
-@noindent
-Hubble already knows that thread allocation to the the CPU cores is
-asynchronous. Hence each time you run it, the order of which job gets done
-first differs. When using Crop the order of outputs is irrelevant since
-each crop is independent of the rest. This is why the crops are not
-necessarily created in the same input order. He is satisfied with the
-default width of the outputs (which he inspected by running @code{$ astcrop
--P}). If he wanted a different width for the cropped images, he could do
-that with the @option{--wwidth} option which accepts a value in
-arc-seconds.  When he lists the contents of the directory again he finds
-his 200 objects as separate FITS images.
-
-@example
-$ ls
-1_crop.fits 2_crop.fits ... 200_crop.fits
-@end example
-
-@cindex GNU Parallel
-The FITS image format was not designed for efficient/fast viewing, but
-mainly for accurate storing of the data. So he chooses to convert the
-cropped images to a more common image format to view them more quickly and
-easily through standard image viewers (which load much faster than FITS
-image viewer). JPEG is one of the most recognized image formats that is
-supported by most image viewers. Fortunately Gnuastro has just such a tool
-to convert various types of file types to and from each other:
-ConvertType. Hubble has already heard of GNU Parallel from one of his
-colleagues at Mount Wilson Observatory. It allows multiple instances of a
-command to be run simultaneously on the system, so he uses it in
-conjunction with ConvertType to convert all the images to JPEG.
-@example
-$ parallel astconvertt -ojpg ::: *_crop.fits
-@end example
-
-@pindex eog
-@cindex Eye of GNOME
-For his graphical user interface Hubble is using GNOME which is the default
-in most distributions in GNU/Linux. The basic image viewer in GNOME is the
-Eye of GNOME, which has the executable file name @command{eog}
-@footnote{Eye of GNOME is only available for users of the GNOME graphical
-desktop environment which is the default in most GNU/Linux
-distributions. If you use another graphical desktop environment, replace
-@command{eog} with any other image viewer.}. Since he has used it before,
-he knows that once it opens an image, he can use the @key{ENTER} or
-@key{SPACE} keys on the keyboard to go to the next image in the directory
-or the @key{Backspace} key to go the previous image. So he opens the image
-of the first object with the command below and with his cup of coffee in
-his other hand, he flips through his targets very fast to get a good
-initial impression of the morphologies of these extra-galactic nebulae.
-
-@example
-$ eog 1_crop.jpg
-@end example
-
-@cindex GNU Bash
-@cindex GNU Emacs
-@cindex Spiral galaxies
-@cindex Elliptical galaxies
-Hubble's cup of coffee is now finished and he also got a nice general
-impression of the shapes of the nebulae. He tentatively/mentally
-classified the objects into three classes while doing the visual
-inspection. One group of the nebulae have a very simple elliptical
-shape and seem to have no internal special structure, so he gives them
-code 1. Another clearly different class are those which have spiral
-arms which he associates with code 2 and finally there seems to be a
-class of nebulae in between which appear to have a disk but no spiral
-arms, he gives them code 3.
-
-Now he wants to know how many of the nebulae in his extra-galactic sample
-are within each class. Repeating the same process above and writing the
-results on paper is very time consuming and prone to errors. Fortunately
-Hubble knows the basics of GNU Bash shell programming, so he writes the
-following short script with a loop to help him with the job. After all,
-computers are made for us to operate and knowing basic shell programming
-gives Hubble this ability to creatively operate the computer as he
-wants. So using GNU Emacs@footnote{This can be done with any text editor}
-(his favorite text editor) he puts the following text in a file named
-@file{classify.sh}.
-
-@example
-for name in *.jpg
-do
-    eog $name &
-    processid=$!
-    echo -n "$name belongs to class: "
-    read class
-    echo $name $class >> classified.txt
-    kill $processid
-done
-@end example
-
-@cindex Gedit
-@cindex GNU Emacs
-Fortunately GNU Emacs or even simpler editors like Gedit (part of the
-GNOME graphical user interface) will display the variables and shell
-constructs in different colors which can really help in understanding
-the script. Put simply, the @code{for} loop gets the name of each JPEG
-file in the directory this script is run in and puts it in
-@code{name}. In the shell, the value of a variable is used by putting
-a @code{$} sign before the variable name. Then Eye of GNOME is run on
-the image in the background to show him that image and its process ID
-is saved internally (this is necessary to close Eye of GNOME
-later). The shell then prompts the user to specify a class and after
-saving it in @code{class}, it prints the file name and the given class
-in the next line of a file named @file{classified.txt}. To make the
-script executable (so he can run it later any time he wants) he runs:
-
-@example
-$ chmod +x classify.sh
-@end example
-
-@noindent
-Now he is ready to do the classification, so he runs the script:
-
-@example
-$ ./classify.sh
-@end example
-
-@noindent
-In the end he can delete all the JPEG and FITS files along with Crop's log
-file with the following short command. The only files remaining are the
-script and the result of the classification.
-
-@example
-$ rm *.jpg *.fits astcrop.txt
-$ ls
-classified.txt   classify.sh
-@end example
-
-@noindent
-He can now use @file{classified.txt} as input to a plotting program to
-plot the histogram of the classes and start making interpretations
-about what these nebulous objects that are outside of the Galaxy are.
-
-
 
-@node Sufi simulates a detection, General program usage tutorial, Hubble visually checks and classifies his catalog, Tutorials
+@node Sufi simulates a detection, General program usage tutorial, Tutorials, Tutorials
 @section Sufi simulates a detection
 
 It is the year 953 A.D.  and Sufi@footnote{Abd al-rahman Sufi (903 --
@@ -1999,8 +1830,8 @@ technique.  The general outline of the steps he wants to 
take are:
 @enumerate
 
 @item
-Make some mock profiles in an oversampled image. The initial mock
-image has to be oversampled prior to convolution or other forms of
+Make some mock profiles in an over-sampled image. The initial mock
+image has to be over-sampled prior to convolution or other forms of
 transformation in the image. Through his experiences, Sufi knew that
 this is because the image of heavenly bodies is actually transformed
 by the atmosphere or other sources outside the atmosphere (for example
@@ -2010,7 +1841,7 @@ should do all the work on a finer pixel grid. In the end 
he can
 re-sample the result to the initially desired grid size.
 
 @item
-Convolve the image with a PSF image that is oversampled to the same
+Convolve the image with a PSF image that is over-sampled to the same
 value as the mock image. Since he wants to finish in a reasonable time
 and the PSF kernel will be very large due to oversampling, he has to
 use frequency domain convolution which has the side effect of dimming
@@ -2192,7 +2023,7 @@ and showed the effect of convolution to his student and 
explained to him
 how a PSF with a larger FWHM would make the points even wider. With the
 convolved image ready, they were prepared to re-sample it to the original
 pixel scale Sufi had planned [from the @command{$ astmkprof -P} command
-above, recall that MakeProfiles had oversampled the image by 5 times]. Sufi
+above, recall that MakeProfiles had over-sampled the image by 5 times]. Sufi
 explained the basic concepts of warping the image to his student and ran
 Warp with the following command:
 
@@ -2397,7 +2228,7 @@ catalog). It was nearly sunset and they had to begin 
preparing for the
 night's measurements on the ecliptic.
 
 
-@node General program usage tutorial,  , Sufi simulates a detection, Tutorials
+@node General program usage tutorial, Detecting large extended targets, Sufi simulates a detection, Tutorials
 @section General program usage tutorial
 
 @cindex HST
@@ -2426,8 +2257,10 @@ AWK@footnote{@url{https://www.gnu.org/software/gawk}.}).
 @cartouche
 @noindent
 @strong{Type the example commands:} Try to type the example commands on
-your terminal and don't simply copy and paste them. This will help simulate
-future situations when you are processing your own datasets.
+your terminal and use the history feature of your command-line (by pressing
+the ``up'' button to retrieve previous commands). Don't simply copy and
+paste the commands shown here. This will help simulate future situations
+when you are processing your own datasets.
 @end cartouche
 
 A handy feature of Gnuastro is that all program names start with
@@ -3410,7 +3243,7 @@ the same Segment and MakeCatalog calls above for the 
F105W filter, we are
 going to get a different number of objects and clumps. Matching the two
 catalogs is possible (for example with @ref{Match}), but the fact that the
 measurements will be done on different pixels, can bias the result. Since
-the Point Spread Function (PSF) of both images is very similar, an accurate
+the Point spread function (PSF) of both images is very similar, an accurate
 color calculation can only be done when magnitudes are measured from the
 same pixels on both images.
 
@@ -3552,125 +3385,978 @@ the previous MakeCatalog call, you will notice that 
there is no more
 object.
 
 @example
-$ astmkcatalog apertures.fits -h1 --zeropoint=26.27        \
-               --valuesfile=nc/xdf-f105w.fits              \
-               --ids --ra --dec --magnitude --sn           \
-               --output=cat/xdf-f105w-aper.fits
+$ astmkcatalog apertures.fits -h1 --zeropoint=26.27        \
+               --valuesfile=nc/xdf-f105w.fits              \
+               --ids --ra --dec --magnitude --sn           \
+               --output=cat/xdf-f105w-aper.fits
+@end example
+
+This catalog has the same number of rows as the catalog produced from
+clumps; therefore, similar to how we found colors, you can compare the
+aperture and clump magnitudes, for example. You can also change the filter
+name and zeropoint magnitude and run this command again to get the fixed
+aperture magnitude in the F160W filter and measure colors on apertures.
+
+@cindex GNU AWK
+Let's find some of the objects with the strongest color difference and make
+a cutout to inspect them visually: let's see what the objects with a color
+more than two magnitudes look like. We'll use the
+@file{cat/xdf-f105w-f160w_c.txt} file that we made above. With the command
+below, all lines with a color more than 1.5 will be put in @file{reddest.txt}
+
+@example
+$ awk '$5>1.5' cat/xdf-f105w-f160w_c.txt > red.txt
+@end example
+
+We can now feed @file{red.txt} into Gnuastro's Crop to see what these
+objects look like. To keep things clean, we'll make a directory called
+@file{crop-red} and ask Crop to save the crops in this directory. We'll
+also add a @file{-f160w.fits} suffix to the crops (to remind us which image
+they came from).
+
+@example
+$ mkdir crop-red
+$ astcrop --mode=wcs --coordcol=3 --coordcol=4 flat-ir/xdf-f160w.fits \
+          --catalog=red.txt --width=15/3600,15/3600                   \
+          --suffix=-f160w.fits --output=crop-red
+@end example
+
+As with the MakeProfiles command above, you might notice that the crops
+aren't made in order. This is because each crop is independent of the rest,
+therefore crops are done in parallel, and parallel operations are
+asynchronous. In the command above, you can change @file{f160w} to
+@file{f105w} to make the crops in both filters.
+
+To view the crops more easily (not having to open ds9 for each image), you
+can convert the FITS crops into the JPEG format with a shell loop like the
+one below.
+
+@example
+$ cd crop-red
+$ for f in *.fits; do                                                  \
+    astconvertt $f --fluxlow=-0.001 --fluxhigh=0.005 --invert -ojpg;   \
+  done
+$ cd ..
+@end example
+
+You can now use your general graphical user interface image viewer to flip
+through the images easily. On GNOME, you can use the ``Eye of GNOME'' image
+viewer (with the executable name @file{eog}). Run the command below and, by
+pressing the @key{<SPACE>} key, you can flip through the images and compare
+them visually. Of course, the flux ranges have been chosen generically here
+to show the fainter parts. Therefore, brighter objects will be fully black.
+
+@example
+$ eog 1-f160w.jpg
+@end example
+
+@cindex GNU Parallel
+The @code{for} loop above to convert the images will do the job in series:
+each file is converted only after the previous ones are complete. If you
+have @url{https://www.gnu.org/software/parallel, GNU Parallel}, you can
+greatly speed up this conversion. GNU Parallel will run the separate
+commands simultaneously on different CPU threads in parallel. For more
+information on efficiently using your threads, see @ref{Multi-threaded
+operations}. Here is a replacement for the shell @code{for} loop above
+using GNU Parallel.
+
+@example
+$ cd crop-red
+$ parallel astconvertt --fluxlow=-0.001 --fluxhigh=0.005 --invert      \
+           -ojpg ::: *.fits
+$ cd ..
+@end example
+
+Another thing that is commonly needed is to visually mark these objects on
+the image. DS9 has the ``region'' concept for this purpose. You just have
+to convert your catalog into a ``region file'' to feed into DS9. To do
+that, you can use AWK again as shown below.
+
+@example
+$ awk 'BEGIN@{print "# Region file format: DS9 version 4.1";     \
+             print "global color=green width=2";                \
+             print "fk5";@}                                      \
+       @{printf "circle(%s,%s,1\")\n", $3, $4;@}' red.txt         \
+       > reddest.reg
+@end example
+
+This region file can be loaded into DS9 with its @option{-regions} option
+to display over any image (that has a world coordinate system). In the
+example below, we'll open Segment's output and load the regions over all
+the extensions (to see the image and the respective clump):
+
+@example
+$ ds9 -mecube seg/xdf-f160w.fits -zscale -zoom to fit    \
+      -regions load all reddest.reg
+@end example
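If you are curious what the generated region file actually contains, here is the same AWK logic run on a made-up two-row catalog (the labels, magnitudes, and coordinates below are arbitrary values, purely for illustration; real catalogs will of course have your measured positions):

```shell
# A made-up two-row catalog (label, magnitude, RA, Dec); only columns
# 3 and 4 (RA and Dec) are used by the region-making AWK command.
printf '1 25.0 53.16 -27.78\n2 24.3 53.17 -27.79\n' > demo.txt

# Same logic as the region-file command above (plain braces here,
# since we are outside a Texinfo @example block).
awk 'BEGIN{print "# Region file format: DS9 version 4.1";
           print "global color=green width=2";
           print "fk5";}
     {printf "circle(%s,%s,1\")\n", $3, $4;}' demo.txt > demo.reg

cat demo.reg
```

The output is three header lines followed by one @code{circle} line (of radius 1 arc-second) per catalog row, which is exactly what DS9 expects from its @option{-regions} option.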
+
+Finally, if this book or any of the programs in Gnuastro have been useful
+for your research, please cite the respective papers and share your
+thoughts and suggestions with us (it can be very encouraging). All Gnuastro
+programs have a @option{--cite} option to help you cite the authors' work
+more easily. Just note that it may be necessary to cite additional papers
+for different programs, so please try it out for any program you use.
+
+@example
+$ astmkcatalog --cite
+$ astnoisechisel --cite
+@end example
+
+
+
+
+
+
+
+
+
+
+@node Detecting large extended targets, Hubble visually checks and classifies his catalog, General program usage tutorial, Tutorials
+@section Detecting large extended targets
+
+The outer wings of large and extended objects can sink into the noise very
+gradually and can have a large variety of shapes (for example due to tidal
+interactions). Therefore separating the outer boundaries of the galaxies
+from the noise can be particularly tricky. Besides causing an
+under-estimation of the target's total brightness, failure
+to detect such faint wings will also cause a bias in the noise
+measurements, thereby hampering the accuracy of any measurement on the
+dataset. Therefore even if they don't constitute a significant fraction of
+the target's light, or aren't your primary target, these regions must not
+be ignored. In this tutorial, we'll walk you through the strategy of
+detecting such targets using @ref{NoiseChisel}.
+
+@cartouche
+@noindent
+@strong{Don't start with this tutorial:} If you haven't already completed
+@ref{General program usage tutorial}, we strongly recommend going through
+that tutorial before starting this one. Basic features like access to this
+book on the command-line, the configuration files of Gnuastro's programs,
+benefiting from the modular nature of the programs, viewing multi-extension
+FITS files, or using NoiseChisel's outputs are discussed in more detail
+there.
+@end cartouche
+
+@cindex M51
+@cindex NGC5195
+@cindex SDSS, Sloan Digital Sky Survey
+@cindex Sloan Digital Sky Survey, SDSS
+We'll try to detect the faint tidal wings of the beautiful M51
+group@footnote{@url{https://en.wikipedia.org/wiki/M51_Group}} in this
+tutorial. We'll use a dataset/image from the public
+@url{http://www.sdss.org/, Sloan Digital Sky Survey}, or SDSS. Due to its
+more peculiar low surface brightness structure/features, we'll focus on the
+dwarf companion galaxy of the group (or NGC 5195). To get the image, you
+can use SDSS's @url{https://dr12.sdss.org/fields, Simple field search}
+tool. As long as it is covered by the SDSS, you can find an image
+containing your desired target either by providing a standard name (if it
+has one), or its coordinates. To access the dataset we will use here, write
+@code{NGC5195} in the ``Object Name'' field and press the ``Submit'' button.
+
+@cartouche
+@noindent
+@strong{Type the example commands:} Try to type the example commands on
+your terminal and use the history feature of your command-line (by pressing
+the ``up'' button to retrieve previous commands). Don't simply copy and
+paste the commands shown here. This will help simulate future situations
+when you are processing your own datasets.
+@end cartouche
+
+@cindex GNU Wget
+You can see the list of available filters under the color image. For this
+demonstration, we'll use the r-band filter image.  By clicking on the
+``r-band FITS'' link, you can download the image. Alternatively, you can
+just run the following command to download it with GNU Wget@footnote{To
+make the command easier to view on screen or in a page, we have defined the
+top URL of the image as the @code{topurl} shell variable. If you don't want
+to use the variable, just replace @code{$topurl} in the @command{wget}
+command with its value.}. To keep things clean, let's also put it in a
+directory called @file{ngc5195}. With the @option{-O} option, we are asking
+Wget to save the downloaded file with a more manageable name:
+@file{r.fits.bz2} (this is an r-band image of NGC 5195, which was the
+directory name).
+
+@example
+$ mkdir ngc5195
+$ cd ngc5195
+$ topurl=https://dr12.sdss.org/sas/dr12/boss/photoObj/frames
+$ wget $topurl/301/3716/6/frame-r-003716-6-0117.fits.bz2 -Or.fits.bz2
+@end example
+
+@cindex Bzip2
+@noindent
+This server keeps the files in a Bzip2 compressed file format. So we'll
+first decompress it with the following command. By convention, compression
+programs delete the original file (the compressed file when un-compressing,
+or the un-compressed file when compressing). To keep the original file, you
+can use the
+@option{--keep} or @option{-k} option which is available in most
+compression programs for this job. Here, we don't need the compressed file
+anymore, so we'll just let @command{bunzip2} delete it for us and keep the
+directory clean.
+
+@example
+$ bunzip2 r.fits.bz2
+@end example
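If you'd like to see the effect of @option{--keep} before touching real data, here is a quick sketch on a throwaway file (the file name is arbitrary):

```shell
# Create a throwaway file and compress it while keeping the original:
echo "sample data" > sample.txt
bzip2 --keep sample.txt
ls sample.txt sample.txt.bz2     # both files now exist

# Without --keep, decompressing deletes the compressed file:
rm sample.txt
bunzip2 sample.txt.bz2
ls sample.txt                    # only the un-compressed file remains
```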
+
+Let's see how NoiseChisel operates on it with its default parameters:
+
+@example
+$ astnoisechisel r.fits -h0
+@end example
+
+As described in @ref{NoiseChisel output}, NoiseChisel's default output is a
+multi-extension FITS file. A method to view them effectively and easily is
+discussed in @ref{Viewing multiextension FITS images}.
+
+Open the output @file{r_detected.fits} file and you will immediately notice
+how NoiseChisel's default configuration is not suitable for this dataset:
+the Sky estimation has failed so terribly that the tile grid (where the Sky
+was estimated, and subtracted) is visible in the first extension (input
+dataset subtracted by the Sky value). If you look into the third and fourth
+extensions (the Sky and its standard deviation) you will see how they
+exactly map NGC 5195! This is not good! There shouldn't be any signature of
+your extended target on the Sky and its standard deviation images. After
+all, the Sky is supposed to be the average value @emph{in the absence} of
+signal, see @ref{Sky value}.
+
+The fact that signal has been detected as Sky shows that you haven't done a
+good detection. Generally, any time your target is much larger than the
+tile size and the signal is almost flat (like this case), this @emph{will}
+happen, even if it isn't dramatic enough to be seen in the first
+extension. Therefore, @strong{the best place} to check the accuracy of
+your detection is the noise extensions (third and fourth extension) of
+NoiseChisel's output.
+
+When dominated by the background, noise has a symmetric
+distribution. However, signal is not symmetric (we don't have negative
+signal). Therefore when non-constant signal is present in a noisy dataset,
+the distribution will be positively skewed. This skewness is a good measure
+of how much signal we have in the distribution. The skewness can be
+accurately measured by the difference in the mode and median, for more see
+@ref{Quantifying signal in a tile}, and Appendix C
+@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
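You can see this mean/median behavior in its simplest form directly on the command-line. The tiny samples below are made up just for illustration: in the symmetric ``noise'' sample the mean and median agree, but a single bright positive ``signal'' value pulls the mean above the median.

```shell
# Made-up samples: a symmetric one, and one with a bright outlier.
printf '1\n2\n3\n4\n5\n'     > noise.txt
printf '1\n2\n3\n4\n5\n30\n' > withsignal.txt

# For each sample, compute the mean and median with AWK (the input
# must be sorted for the median).
for f in noise.txt withsignal.txt; do
    sort -n $f | awk -v name=$f '
        {v[NR]=$1; s+=$1}
        END{m = (NR%2 ? v[(NR+1)/2] : (v[NR/2]+v[NR/2+1])/2);
            printf "%s: mean=%.2f median=%.2f\n", name, s/NR, m}'
done
# noise.txt:      mean=3.00 median=3.00
# withsignal.txt: mean=7.50 median=3.50
```

In a real tile, NoiseChisel quantifies this difference in units of quantiles (between the mode and the median), but the underlying idea is the same.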
+
+Skewness is only a proxy for signal when the signal has structure (varies
+per pixel). Therefore, when it is approximately constant over a whole tile,
+or sub-set of the image, the signal's effect is just to shift the symmetric
+center of the noise distribution to the positive and there won't be any
+skewness: this positive@footnote{In processed images, where the Sky value
+can be over-estimated, this constant shift can be negative.}  shift that
+preserves the symmetric distribution is the Sky value. When there is a
+gradient over the dataset, different tiles will have different constant
+shifts/Sky-values, for example see Figure 11 of
+@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+
+To get less scatter in measuring the mode and median (and thus better
+estimate the skewness), you will need a larger tile. In Gnuastro, you can
+see the option values (@option{--tilesize} in this case) by adding the
+@option{-P} option to your last command. Try it. You can clearly see that
+the default tile size is indeed much smaller than this (huge) dwarf
+galaxy. Therefore NoiseChisel was unable to identify the skewness within
+the tiles under NGC 5195. Recall that NoiseChisel only uses tiles with no
+signal/skewness to define its threshold. Because of this, the threshold has
+been over-estimated on those tiles and further exacerbated the
+non-detection of the diffuse regions. To see which tiles were used for
+estimating the quantile threshold (no skewness was measured), you can use
+NoiseChisel's @option{--checkqthresh} option:
+
+@example
+$ astnoisechisel r.fits -h0 --checkqthresh
+@end example
+
+Notice how this option doesn't allow NoiseChisel to finish. NoiseChisel
+aborted after finding the quantile thresholds. When you call any of
+NoiseChisel's @option{--check*} options, by default, it will abort as soon
+as all the check steps have been written in the check file (a
+multi-extension FITS file). To optimize the threshold-related settings for
+this image, we'll be playing with this check file for the majority of this
+tutorial. So let's have a closer look at it.
+
+The first extension of @file{r_qthresh.fits} (@code{CONVOLVED}) is the
+convolved input image (where the threshold is defined and applied), for
+more on the effect of convolution and thresholding, see Sections 3.1.1 and
+3.1.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
+[2015]}. The second extension (@code{QTHRESH_ERODE}) has a blank value for
+all the pixels of any tile that was identified as having significant
+signal. Playing a little with the dynamic range of this extension, you can
+clearly see how the non-blank tiles around NGC 5195 have a gradient. You do
+not want this behavior. The ultimate purpose of the next few trials will be
+to remove the gradient from the non-blank tiles.
+
+The next two extensions (@code{QTHRESH_NOERODE} and @code{QTHRESH_EXPAND})
+are for later steps in NoiseChisel. The same tiles are masked, but those
+with a value, have a different value compared to @code{QTHRESH_ERODE}. In
+the subsequent three extensions, you can see how the blank tiles are
+filled/interpolated. The three extensions after those show the smoothed tile
+values. Finally in the last extension (@code{QTHRESH-APPLIED}), you can see
+the effect of applying @code{QTHRESH_ERODE} on @code{CONVOLVED} (pixels
+with a value of 0 were below the threshold).
+
+@cartouche
+@noindent
+@strong{Skipping convolution for faster tests:} The slowest step of
+NoiseChisel is the convolution of the input dataset. Therefore when your
+dataset is large (unlike the one in this test), and you are not changing
+the input dataset or kernel in multiple runs (as in the tests of this
+tutorial), it is faster to do the convolution separately once (using
+@ref{Convolve}) and use NoiseChisel's @option{--convolved} option to
+directly feed the convolved image and avoid convolution. For more on
+@option{--convolved}, see @ref{NoiseChisel input}.
+@end cartouche
+
+Fortunately this image is large and also has a nice, clean region (filled
+only with very small/distant stars and galaxies). So our first solution is
+to increase the tile size. To identify the skewness caused by NGC 5195 on
+the tiles under it, we thus have to choose a tile size that is larger than
+the scale of the signal's gradient. Let's try a 100 by 100 tile size:
+
+@example
+$ astnoisechisel r.fits -h0 --tilesize=100,100 --checkqthresh
+@end example
+
+You can clearly see the effect of the increased tile size: the tiles are
+much larger. Nevertheless the higher-valued tiles are still systematically
+surrounding NGC 5195. As a result, when flipping through the interpolated
+and smoothed values, you can still see the signature of the galaxy, and the
+ugly tile signatures are still present in @code{QTHRESH-APPLIED}. So let's
+increase the tile size even further (check the result of the first before
+going to the second):
+
+@example
+$ astnoisechisel r.fits -h0 --tilesize=150,150 --checkqthresh
+$ astnoisechisel r.fits -h0 --tilesize=200,200 --checkqthresh
+@end example
+
+The number of tiles with a gradient does indeed decrease with a larger tile
+size, but you still see a gradient in the raw values. The tile signatures
+in the thresholded image are also still present. These are not good
+signs. So, let's keep the 200 by 200 tile size and start playing with the
+other constraint that we have: the acceptable distance (in quantile),
+between the mode and median.
+
+The tile size is now very large (16 times the area of the default tile
+size). We thus have much less scatter in the estimation of the mode and
+median and we can afford to decrease the acceptable distance between the
+two. The acceptable distance can be set through the @option{--modmedqdiff}
+option (read as ``mode-median quantile difference''). Before trying the
+next command, run the previous command with a @option{-P} to see what value
+it originally had.
+
+@example
+$ astnoisechisel r.fits -h0 --tilesize=200,200 --modmedqdiff=0.005    \
+                 --checkqthresh
+@end example
+
+But this command doesn't finish like the previous ones. A
+@file{r_qthresh.fits} file was created, but instead of saying that the
+quantile thresholds have been applied, a long error message is
+printed. Please read the error message before continuing to read here.
+
+The error message fully describes the problem and even proposes
+solutions. As suggested there, the ideal solution would be to use SDSS
+images outside of this field and add them to this one to have a larger
+input image (with a larger area outside the diffuse regions). But we don't
+always have this luxury, so let's keep using this image for the rest of
+this tutorial.
+
+First, open @file{r_qthresh.fits} and have a look at the successful
+tiles. Unlike the previous @option{--checkqthresh} outputs, this one only
+has four extensions: as the error message explains, the interpolation (to
+give values to blank tiles) has not been done. Therefore its check results
+aren't present.
+
+At the start of the error message, NoiseChisel tells us how many tiles
+passed the test for having no significant signal: six. Looking closely at
+the dataset, we see that outside NGC 5195, there is no background gradient
+(the background is a fixed value). Our tile sizes are also very large (and
+thus don't have much scatter). So a good way to loosen up the parameters
+can be to simply decrease the number of neighboring tiles needed for
+interpolation, with @option{--interpnumngb} (read as ``interpolation number
+of neighbors'').
+
+@example
+$ astnoisechisel r.fits -h0 --tilesize=200,200 --modmedqdiff=0.005    \
+                 --interpnumngb=6 --checkqthresh
+@end example
+
+There is no longer any significant gradient in the usable tiles and no
+signature of NGC 5195 exists in the interpolated and smoothed values. But
+before finishing the quantile threshold, let's have a closer look at the
+thresholded image in @code{QTHRESH-APPLIED}. Slide the dynamic range in
+your FITS viewer so 0 valued pixels are black and all non-zero pixels are
+white. You will see that the black holes are not evenly distributed. Those
+that follow the tail of NGC 5195 are systematically smaller than those in
+the far-right of the image. This suggests that we can decrease the quantile
+threshold (@code{--qthresh}) even further: there is still signal down
+there!
+
+@example
+$ rm r_qthresh.fits
+$ astnoisechisel r.fits -h0 --tilesize=200,200 --modmedqdiff=0.005    \
+                 --interpnumngb=6 --qthresh=0.2
+@end example
+
+Since the quantile threshold of the previous command was satisfactory, we
+finally removed @option{--checkqthresh} to let NoiseChisel proceed until
+completion. Looking at the @code{DETECTIONS} extension of NoiseChisel's
+output, we see the right-ward edges in particular have many holes that are
+fully surrounded by signal and the signal stretches out in the noise very
+thinly. This suggests that there is still signal that can be detected. So
+we'll decrease the growth quantile (for larger/deeper growth into the
+noise, with @option{--detgrowquant}) and increase the size of holes that
+can be filled (if they are fully surrounded by signal, with
+@option{--detgrowmaxholesize}). Since we are done with our detection, to
+facilitate later steps, we'll also add the @option{--label} option so the
+connected regions get different labels.
+
+@example
+$ astnoisechisel r.fits -h0 --tilesize=200,200 --modmedqdiff=0.005    \
+                 --interpnumngb=6 --qthresh=0.2 --detgrowquant=0.6    \
+                 --detgrowmaxholesize=10000 --label
+@end example
+
+Looking into the output, we now clearly see that the tidal features of M51
+and NGC 5195 are detected nicely in the same direction as expected (towards
+the bottom right side of the image). However, as discussed above, the best
+measure of good detection is the noise, not the detections themselves. So
+let's look at the Sky and its standard deviation. The Sky standard
+deviation no longer has any footprint of NGC 5195. But the Sky still has a
+very faint shadow of the dwarf galaxy (the values on the left are larger
+than on the right). However, this gradient in the Sky (output of the first
+command below) is smaller (by a factor of @mymath{\sim20}) than the standard
+deviation (output of the second command). So we can stop playing with
+NoiseChisel here, and leave the configuration for a more accurate detection
+to you.
+
+@example
+$ aststatistics r_detected.fits -hSKY --maximum --minimum       \
+                | awk '@{print $1-$2@}'
+$ aststatistics r_detected.fits -hSKY_STD --mean
+@end example
+
+Let's see how deeply/successfully we carved out M51 and NGC 5195's tail
+from the noise. For this measurement, we'll need to estimate the average
+flux on the outer edges of the detection. Fortunately all this can be done
+with a few simple commands (and no higher-level language mini-environments)
+using @ref{Arithmetic} and @ref{MakeCatalog}.
+
+The M51 group detection is by far the largest detection in this image. We
+can thus easily find the ID/label that corresponds to it. We'll first run
+MakeCatalog to find the area of all the detections, then we'll use AWK to
+find the ID of the largest object and keep it as a shell variable
+(@code{id}):
+
+@example
+$ astmkcatalog r_detected.fits --ids --geoarea -hDETECTIONS -ocat.txt
+$ id=$(awk '!/^#/@{if($2>max) @{id=$1; max=$2@}@} END@{print id@}' cat.txt)
+@end example
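If you'd like to check the AWK logic in isolation, you can run it on a toy catalog (the numbers below are made up): any non-comment line whose second column (the area) exceeds the running maximum updates both the maximum and the stored ID, and the last stored ID is printed at the end.

```shell
# Hypothetical catalog with the same column layout (ID, then area).
cat > toycat.txt <<EOF
# Column 1: ID
# Column 2: Area
1     50
2     7000
3     120
EOF

# Same logic as above: print the ID of the row with the largest area.
awk '!/^#/{if($2>max) {id=$1; max=$2}} END{print id}' toycat.txt
```

For this toy catalog, the command prints @code{2} (the row with area 7000).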
+
+To separate the outer edges of the detections, we'll need to ``erode'' the
+detections. We'll erode twice (once would be too thin for such a
+huge object), using a maximum connectivity of 2 (8-connected
+neighbors). We'll then save the output in @file{eroded.fits}.
+
+@example
+$ astarithmetic r_detected.fits 0 gt 2 erode -hDETECTIONS -oeroded.fits
+@end example
+
+@noindent
+We should now just keep the pixels that have the ID of the M51 group, but a
+value of 0 in @file{eroded.fits}. We'll keep the output in
+@file{boundary.fits}.
+
+@example
+$ astarithmetic r_detected.fits $id eq eroded.fits 0 eq and     \
+                -hDETECTIONS -h1 -oboundary.fits
+@end example
+
+Open the image and have a look. You'll see that the detected edge of the
+M51 group is now clearly visible. You can use @file{boundary.fits} to mark
+(set to blank) this boundary on the input image and get a visual feeling of
+how far it extends:
+
+@example
+$ astarithmetic r.fits boundary.fits nan where -ob-masked.fits -h0
+@end example
+
+To quantify how deep we have detected the low-surface brightness regions,
+we'll use the command below. In short it just divides all the 1-valued
+pixels of @file{boundary.fits} in the Sky subtracted input (first extension
+of NoiseChisel's output) by the pixel standard deviation of the same
+pixel. This will give us a signal-to-noise ratio image. The mean value of
+this image shows the level of surface brightness that we have achieved.
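The core of that measurement (dividing each boundary pixel by its standard deviation, then averaging the ratios) can be sketched with plain AWK on made-up numbers:

```shell
# Each line is one hypothetical boundary pixel: its Sky-subtracted
# value, then its Sky standard deviation. Divide the first by the
# second (per-pixel S/N) and print the mean of the ratios.
printf '0.02 0.4\n0.01 0.5\n0.03 0.6\n' \
    | awk '{sn += $1/$2; n++} END{printf "%.4f\n", sn/n}'
```

For these three pixels, the mean signal-to-noise ratio is @code{0.0400}.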
+
+You can also break the command below into multiple calls to Arithmetic and
+create temporary files to understand it better. However, if you have a look
+at @ref{Reverse polish notation} and @ref{Arithmetic operators}, you should
+be able to easily understand what your computer does when you run this
+command@footnote{@file{boundary.fits} (extension @code{1}) is a binary (0
+or 1 valued) image. Applying the @code{not} operator on it, just flips all
+its pixels. Through the @code{where} operator, we are setting all the newly
+1-valued pixels in @file{r_detected.fits} (extension @code{INPUT-NO-SKY})
+to NaN/blank. In the second line, we are dividing all the non-blank values
+by @file{r_detected.fits} (extension @code{SKY_STD}). This gives the
+signal-to-noise ratio for each of the pixels on the boundary. Finally, with
+the @code{meanvalue} operator, we are taking the mean value of all the
+non-blank pixels and reporting that as a single number.}.
+
+@example
+$ astarithmetic r_detected.fits boundary.fits not nan where \
+                r_detected.fits /                           \
+                meanvalue                                   \
+                -hINPUT-NO-SKY -h1 -hSKY_STD --quiet
+--> 0.0511864
+@end example
+
+@noindent
+The outer wings were therefore non-parametrically detected down to
+@mymath{\rm{S/N}\approx0.05}.
+
+In interpreting this value, just keep in mind that NoiseChisel
+works based on the contiguity of signal in the pixels. Therefore the larger
+the object, the deeper NoiseChisel can carve it out of the noise. In other
+words, this reported depth is only for this particular object and dataset,
+processed with this particular NoiseChisel configuration: if the M51 group
+in this image was larger/smaller than this, or if the image was
+larger/smaller, or if we had used a different configuration, we would go
+deeper/shallower.
+
+@cartouche
+@noindent
+@strong{The NoiseChisel configuration found here is NOT GENERIC for any
+large object:} As you saw above, the reason we chose this particular
+configuration for NoiseChisel to detect the wings of the M51 group was
+strongly influenced by this particular object in this particular
+image. When signal takes over such a large fraction of your dataset, you
+will need some manual checking, intervention, or customization, to make
+sure that it is successfully detected. In other words, to make sure that
+your noise measurements are least affected by the signal@footnote{In the
+future, we may add capabilities to optionally automate some of the choices
+made here, please join us in doing this if you are interested. However,
+given the many problems in existing ``smart'' solutions, such automatic
+changing of the configuration may cause more problems than they solve. So
+even when they are implemented, we would strongly recommend manual checks
+and intervention for a robust analysis.}.
+@end cartouche
+
+To avoid typing all these options every time you run NoiseChisel on this
+image, you can use Gnuastro's configuration files, see @ref{Configuration
+files}. For an applied example of setting/using them, see @ref{General
+program usage tutorial}.
+
+To continue your analysis of such datasets with extended emission, you can
+use @ref{Segment} to identify all the ``clumps'' over the diffuse regions:
+background galaxies and foreground stars.
+
+@example
+$ astsegment r_detected.fits
+@end example
+
+@cindex DS9
+@cindex SAO DS9
+Open the output @file{r_detected_segmented.fits} as a multi-extension data
+cube like before and flip through the first and second extensions to see
+the detected clumps (all pixels with a value larger than 1). To optimize
+the parameters and make sure you have detected what you wanted, it's highly
+recommended to visually inspect the detected clumps on the input image.
+
+For visual inspection, you can make a simple shell script like below. It
+will first call MakeCatalog to estimate the positions of the clumps, then
+make an SAO ds9 region file and open ds9 with the image and region
+file. Recall that in a shell script, the numeric variables (like @code{$1},
+@code{$2}, and @code{$3} in the example below) represent the arguments
+given to the script. But when used in the AWK arguments, they refer to
+column numbers.
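A hypothetical one-line script can make this distinction concrete. Here the shell's @code{$1} (the script's first argument) is handed to AWK through its @option{-v} option, while AWK's own @code{$1} refers to the first column of its input:

```shell
# Hypothetical demonstration script: `$1' in the shell is the first
# script argument; `$1' inside the single-quoted AWK program is the
# first input column.
cat > argdemo.sh <<'EOF'
#!/bin/sh
echo "a b c" | awk -v arg="$1" '{print "shell arg:", arg, "| awk col1:", $1}'
EOF
chmod +x argdemo.sh
./argdemo.sh hello
# shell arg: hello | awk col1: a
```

Because the AWK program is in single quotes, the shell never expands the @code{$1} inside it; that is why the script below can safely mix both meanings.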
+
+To create the shell script, using your favorite text editor, put the
+contents below into a file called @file{check-clumps.sh}. Recall that
+everything after a @code{#} is just comments to help you understand the
+command (so read them!). Also note that if you are copying from the PDF
+version of this book, fix the single quotes in the AWK command.
+
+@example
+#! /bin/bash
+set -e    # Stop execution when there is an error.
+set -u    # Stop execution when a variable is not initialized.
+
+# Run MakeCatalog to write the coordinates into a FITS table.
+# Default output is `$1_cat.fits'.
+astmkcatalog $1.fits --clumpscat --ids --ra --dec
+
+# Use Gnuastro's Table program to read the RA and Dec columns of the
+# clumps catalog (in the `CLUMPS' extension). Then pipe the columns
+# to AWK for saving as a DS9 region file.
+asttable $1"_cat.fits" -hCLUMPS -cRA,DEC                               \
+         | awk 'BEGIN @{ print "# Region file format: DS9 version 4.1"; \
+                        print "global color=green width=1";            \
+                        print "fk5" @}                                  \
+                @{ printf "circle(%s,%s,1\")\n", $1, $2 @}' > $1.reg
+
+# Show the image (with the requested color scale) and the region file.
+ds9 -geometry 1800x3000 -mecube $1.fits -zoom to fit                   \
+    -scale limits $2 $3 -regions load all $1.reg
+
+# Clean up (delete intermediate files).
+rm $1"_cat.fits" $1.reg
+@end example
+
+@noindent
+Finally, you just have to activate its executable flag with the command
+below. This will enable you to directly call the script as a command.
+
+@example
+$ chmod +x check-clumps.sh
+@end example
+
+Note that this script expects the first argument to be the input's filename
+@emph{without} the @file{.fits} suffix. The script produces intermediate
+files (a catalog and a DS9 region file, which are later deleted), and we
+don't want multiple instances of the script (on different files in the same
+directory) to collide (read/write the same intermediate files). Therefore,
+we have added suffixes to the input's name to identify the intermediate
+files. Note how all the @code{$1} instances in
+the commands (not within the AWK command where @code{$1} refers to the
+first column) are followed by a suffix. If you want to keep the
+intermediate files, put a @code{#} at the start of the last line.
+
+The few, but high-valued, bright pixels in the central parts of the
+galaxies can hinder easy visual inspection of the fainter parts of the
+image. With the second and third arguments to this script, you can set the
+numerical values of the color map (first is minimum/black, second is
+maximum/white). You can call this script with any@footnote{Some
+modifications are necessary based on the input dataset: depending on the
+dynamic range, you have to adjust the second and third arguments. But more
+importantly, depending on the dataset's world coordinate system, you have
+to change the region @code{width}, in the AWK command. Otherwise the circle
+regions can be too small/large.} output of Segment (when
+@option{--rawoutput} is @emph{not} used) with a command like this:
+
+@example
+$ ./check-clumps.sh r_detected_segmented -0.1 2
+@end example
+
+Go ahead and run this command. You will see the intermediate processing
+being done and finally it opens SAO DS9 for you with the regions
+superimposed on all the extensions of Segment's output. The script will
+only finish (and give you control of the command-line) when you close
+DS9. If you need access to the command-line before closing DS9, you
+can add a @code{&} after the end of the command above.
+
+@cindex Purity
+@cindex Completeness
+While DS9 is open, slide the dynamic range (values for black and white, or
+minimum/maximum values in different color schemes) and zoom into various
+regions of the M51 group to see if you are satisfied with the detected
+clumps. Don't forget that through the ``Cube'' window that is opened along
+with DS9, you can flip through the extensions and see the actual clumps
+also. The questions you should be asking yourself are these: 1) Which real
+clumps (as you visually @emph{feel}) have been missed? In other words, is
+the @emph{completeness} good? 2) Are there any clumps which you @emph{feel}
+are false? In other words, is the @emph{purity} good?
+
+Note that completeness and purity are not independent of each other; they
+are anti-correlated: the higher your purity, the lower your completeness
+and vice-versa. You can see this by playing with the purity level using the
+@option{--snquant} option. Run Segment as shown above again with @code{-P}
+and see its default value. Then increase/decrease it for higher/lower
+purity and check the result as before. You will see that if you want the
+best purity, you have to sacrifice completeness and vice versa.
+
+One interesting region to inspect in this image is the many bright peaks
+around the central parts of M51. Zoom into that region and inspect how many
+of them have actually been detected as true clumps. Do you have a good
+balance between completeness and purity? Also look out far into the wings
+of the group and inspect the completeness and purity there.
+
+An easier way to inspect completeness (and only completeness) is to mask all
+the pixels detected as clumps and see what is left over. You can do this
+with a command like below. For easy reading of the command, we'll define
+the shell variable @code{i} for the image name and save the output in
+@file{masked.fits}.
+
+@example
+$ i=r_detected_segmented.fits
+$ astarithmetic $i $i 0 gt nan where -hINPUT -hCLUMPS -omasked.fits
+@end example
+
+Inspecting @file{masked.fits}, you can see some very diffuse peaks that
+have been missed, especially as you go farther away from the group center
+and into the diffuse wings. This is due to the fact that with this
+configuration we have focused more on the sharper clumps. To put the focus
+more on diffuse clumps, you can use a wider convolution kernel. Using a larger
+kernel can also help in detecting larger clumps (thus better separating
+them from the underlying signal).
+
+You can make any kernel easily using the @option{--kernel} option in
+@ref{MakeProfiles}. But note that a larger kernel is also going to wash-out
+many of the sharp/small clumps close to the center of M51 and also some
+smaller peaks on the wings. Please continue playing with Segment's
+configuration to obtain a more complete result (while keeping reasonable
+purity). We'll finish the discussion on finding true clumps here.
+
+The properties of the background objects (detected as clumps over the
+diffuse region) can then easily be measured using @ref{MakeCatalog}. When
+doing so, you shouldn't mask the diffuse region: when measuring clump
+properties with @ref{MakeCatalog}, the ambient flux (from the diffuse
+region) is calculated and subtracted. If the diffuse region is masked, its
+effect on the clump brightness cannot be calculated and subtracted. But to
+keep this tutorial short, we'll stop
+here. See @ref{General program usage tutorial} and @ref{Segment} for more
+on Segment, producing catalogs with MakeCatalog and using those catalogs.
+
+Finally, if this book or any of the programs in Gnuastro have been useful
+for your research, please cite the respective papers and share your
+thoughts and suggestions with us (it can be very encouraging). All Gnuastro
+programs have a @option{--cite} option to help you cite the authors' work
+more easily. Just note that it may be necessary to cite additional papers
+for different programs, so please try it out for any program you use.
+
+@example
+$ astmkcatalog --cite
+$ astnoisechisel --cite
+@end example
+
+
+
+
+@node Hubble visually checks and classifies his catalog,  , Detecting large extended targets, Tutorials
+@section Hubble visually checks and classifies his catalog
+
+@cindex Edwin Hubble
+In 1924 Hubble@footnote{Edwin Powell Hubble (1889 -- 1953 A.D.) was an
+American astronomer who can be considered as the father of
+extra-galactic astronomy, by proving that some nebulae are too distant
+to be within the Galaxy. He then went on to show that the universe
+appears to expand and also devised a visual classification of the
+galaxies that is known as the Hubble fork.} announced his discovery
+that some of the known nebulous objects are too distant to be within
+the Milky Way (or Galaxy) and that they were probably distant
+Galaxies@footnote{Note that at that time, ``Galaxy'' was a proper noun
+used to refer to the Milky Way. The concept of a galaxy as we define
+it today had not yet become common. Hubble played a major role in
+creating today's concept of a galaxy.} in their own right. He had also
+used them to show that the redshift of the nebulae increases with
+their distance. So now he wants to study them more accurately to see
+what they actually are. Since they are nebulous or amorphous, they
+can't be modeled (like stars that are always a point) easily. So there
+is no better way to distinguish them than to visually inspect them and
+see if it is possible to classify these nebulae or not.
+
+Hubble has stored all the FITS images of the objects he wants to visually
+inspect in his @file{/mnt/data/images} directory. He has also stored his
+catalog of extra-galactic nebulae in
+@file{/mnt/data/catalogs/extragalactic.txt}. Any normal user on his
+GNU/Linux system (including himself) only has read access to the contents
+of the @file{/mnt/data} directory. He has done this by running this command
+as root:
+
+@example
+# chmod -R 755 /mnt/data
 @end example
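You can check what these permission bits amount to in any scratch directory (the path below is made up): mode 755 keeps full access for the owner, while the group and others can only read and traverse.

```shell
# Create a scratch directory and give it the same mode as above.
mkdir -p scratch/data
chmod -R 755 scratch/data

# Show only the permission string: `d' (directory), then `rwx' for
# the owner and `r-x' for the group and for others.
ls -ld scratch/data | cut -c1-10
# drwxr-xr-x
```

The @code{w} bit is absent for group and others, so no ordinary user besides the owner can create, delete, or modify files there.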
 
-This catalog has the same number of rows as the catalog produced from
-clumps, therefore similar to how we found colors, you can compare the
-aperutre and clump magnitudes for example. You can also change the filter
-name and zeropoint magnitudes and run this command again to have the fixed
-aperture magnitude in the F160W filter and measure colors on apertures.
-
-@cindex GNU AWK
-Let's find some of the objects with the strongest color difference and make
-a cutout to inspect them visually: let's see what the objects with a color
-more than two magnitudes look like. We'll use the
-@file{cat/xdf-f105w-f160w_c.txt} file that we made above. With the command
-below, all lines with a color more than 1.5 will be put in @file{reddest.txt}
+@noindent
+Hubble has done this intentionally to avoid mistakenly deleting or
+modifying the valuable images he has taken at Mount Wilson while he is
+working as an ordinary user. Retaking all those images and data is
+simply not an option. In fact they are also in another hard disk
+(@file{/dev/sdb1}). So if the hard disk which stores his GNU/Linux
+distribution suddenly malfunctions due to work load, his data is not
+in harms way. That hard disk is only mounted to this directory when he
+wants to use it with the command:
 
 @example
-$ awk '$5>1.5' cat/xdf-f105w-f160w_c.txt > red.txt
+# mount /dev/sdb1 /mnt/data
 @end example
 
-We can now feed @file{red.txt} into Gnuastro's crop to see what these
-objects look like. To keep things clean, we'll make a directory called
-@file{crop-red} and ask Crop to save the crops in this directory. We'll
-also add a @file{-f160w.fits} suffix to the crops (to remind us which image
-they came from).
+@noindent
+In short, Hubble wants to keep his data safe and fortunately by
+default Gnuastro allows for this.  Hubble creates a temporary
+@file{visualcheck} directory in his home directory for this check. He
+runs the following commands to make the directory and change to
+it@footnote{The @code{pwd} command is short for ``Print Working
+Directory'' and @code{ls} is short for ``list'' which shows the
+contents of a directory.}:
 
 @example
-$ mkdir crop-red
-$ astcrop --mode=wcs --coordcol=3 --coordcol=4 flat-ir/xdf-f160w.fits \
-          --catalog=red.txt --width=15/3600,15/3600                   \
-          --suffix=-f160w.fits --output=crop-red
+$ mkdir ~/visualcheck
+$ cd ~/visualcheck
+$ pwd
+/home/edwin/visualcheck
+$ ls
 @end example
 
-Like the MakeProfiles command above, you might notice that the crops aren't
-made in order. This is because each crop is independent of the rest,
-therefore crops are done in parallel, and parallel operations are
-asynchronous. In the command above, you can change @file{f160w} to
-@file{f105w} to make the crops in both filters.
+Hubble has multiple images in @file{/mnt/data/images}, some of his targets
+might be on the edges of an image and so several images need to be stitched
+to give a good view of them. Also his extra-galactic targets belong to
+various pointings in the sky, so they are not in one large
+image. Gnuastro's Crop is just the program he wants. The catalog in
+@file{extragalactic.txt} is a plain text file which stores the basic
+information of all his known 200 extra-galactic nebulae. If you don't have
+any particular catalog and accompanying image, you can use the Hubble
+Space Telescope F160W catalog that we produced in @ref{General program
+usage tutorial} along with the accompanying image (specify the exact image
+name, not @file{/mnt/data/images/*.fits}). You can select the brightest
+galaxies for an easier classification.
 
-To view the crops more easily (not having to open ds9 for each image), you
-can convert the FITS crops into the JPEG format with a shell loop like
-below.
+@cindex WCS
+@cindex World coordinate system
+In its second column, the catalog has each object's Right Ascension (the
+first column is a label he has given to each object), and in the third, the
+object's declination (he specifies both columns with the
+@option{--coordcol} option). Also, since the coordinates are in the world
+coordinate system (WCS, not pixel positions), he adds @option{--mode=wcs}.
 
 @example
-$ cd crop-red
-$ for f in *.fits; do                                                  \
-    astconvertt $f --fluxlow=-0.001 --fluxhigh=0.005 --invert -ojpg;   \
-  done
-$ cd ..
+$ astcrop --coordcol=2 --coordcol=3 /mnt/data/images/*.fits     \
+          --mode=wcs /mnt/data/catalogs/extragalactic.txt
+Crop started on Tue Jun  14 10:18:11 1932
+  ---- ./4_crop.fits                  1 1
+  ---- ./2_crop.fits                  1 1
+  ---- ./1_crop.fits                  1 1
+[[[ Truncated middle of list ]]]
+  ---- ./198_crop.fits                1 1
+  ---- ./195_crop.fits                1 1
+  - 200 images created.
+  - 200 were filled in the center.
+  - 0 used more than one input.
+Crop finished in:  2.429401 (seconds)
 @end example
 
-You can now easily use your general graphic user interface image viewer to
-flip through the images more easily. On GNOME, you can use the ``Eye of
-GNOME'' image viewer (with executable name of @file{eog}). Run the command
-below and by pressing the @key{<SPACE>} key, you can flip through the
-images and compare them visually more easily. Of course, the flux ranges
-have been chosen generically here for seeing the fainter parts. Therefore,
-brighter objects will be fully black.
+
+@cindex Asynchronous thread allocation
+@noindent
+Hubble already knows that thread allocation to the CPU cores is
+asynchronous. Hence each time Crop is run, the order in which the jobs
+finish differs. When using Crop the order of outputs is irrelevant since
+each crop is independent of the rest. This is why the crops are not
+necessarily created in the same order as the input. He is satisfied with the
+default width of the outputs (which he inspected by running @code{$ astcrop
+-P}). If he wanted a different width for the cropped images, he could do
+that with the @option{--wwidth} option which accepts a value in
+arc-seconds.  When he lists the contents of the directory again he finds
+his 200 objects as separate FITS images.
 
 @example
-$ eog 1-f160w.jpg
+$ ls
+1_crop.fits 2_crop.fits ... 200_crop.fits
 @end example
 
 @cindex GNU Parallel
-The @code{for} loop above to convert the images will do the job in series:
-each file is converted only after the previous ones are complete. If you
-have @url{https://www.gnu.org/software/parallel, GNU Parallel}, you can
-greatly speed up this conversion. GNU Parallel will run the separate
-commands simultaneously on different CPU threads in parallel. For more
-information on efficiently using your threads, see @ref{Multi-threaded
-operations}. Here is a replacement for the shell @code{for} loop above
-using GNU Parallel.
+The FITS image format was not designed for efficient/fast viewing, but
+mainly for accurate storing of the data. So he chooses to convert the
+cropped images to a more common image format to view them more quickly and
+easily through standard image viewers (which load much faster than FITS
+image viewers). JPEG is one of the most recognized image formats that is
+supported by most image viewers. Fortunately Gnuastro has just such a tool
+to convert various file formats to and from each other:
+ConvertType. Hubble has already heard of GNU Parallel from one of his
+colleagues at Mount Wilson Observatory. It allows multiple instances of a
+command to be run simultaneously on the system, so he uses it in
+conjunction with ConvertType to convert all the images to JPEG.
+@example
+$ parallel astconvertt -ojpg ::: *_crop.fits
+@end example
+
+@pindex eog
+@cindex Eye of GNOME
+For his graphical user interface Hubble is using GNOME, which is the default
+in most GNU/Linux distributions. The basic image viewer in GNOME is the
+Eye of GNOME, which has the executable file name @command{eog}
+@footnote{Eye of GNOME is only available for users of the GNOME graphical
+desktop environment which is the default in most GNU/Linux
+distributions. If you use another graphical desktop environment, replace
+@command{eog} with any other image viewer.}. Since he has used it before,
+he knows that once it opens an image, he can use the @key{ENTER} or
+@key{SPACE} keys on the keyboard to go to the next image in the directory
+or the @key{Backspace} key to go to the previous image. So he opens the image
+of the first object with the command below and with his cup of coffee in
+his other hand, he flips through his targets very fast to get a good
+initial impression of the morphologies of these extra-galactic nebulae.
 
 @example
-$ cd crop-red
-$ parallel astconvertt --fluxlow=-0.001 --fluxhigh=0.005 --invert      \
-           -ojpg ::: *.fits
-$ cd ..
+$ eog 1_crop.jpg
 @end example
 
-Another thing that is commonly needed is to visually mark these objects on
-the image. DS9 has the ``Region''s concept for this purpose. You just have
-to convert your catalog into a ``region file'' to feed into DS9. To do
-that, you can use AWK again as shown below.
+@cindex GNU Bash
+@cindex GNU Emacs
+@cindex Spiral galaxies
+@cindex Elliptical galaxies
+Hubble's cup of coffee is now finished and he also got a nice general
+impression of the shapes of the nebulae. He tentatively/mentally
+classified the objects into three classes while doing the visual
+inspection. One group of the nebulae has a very simple elliptical
+shape and seems to have no special internal structure, so he gives
+them code 1. Another clearly different class is that of nebulae with
+spiral arms, which he associates with code 2. Finally, there seems to
+be a class of nebulae in between, which appear to have a disk but no
+spiral arms; he gives them code 3.
+
+Now he wants to know how many of the nebulae in his extra-galactic sample
+are within each class. Repeating the same process above and writing the
+results on paper is very time consuming and prone to errors. Fortunately
+Hubble knows the basics of GNU Bash shell programming, so he writes the
+following short script with a loop to help him with the job. After all,
+computers are made for us to operate and knowing basic shell programming
+gives Hubble this ability to creatively operate the computer as he
+wants. So using GNU Emacs@footnote{This can be done with any text editor.}
+(his favorite text editor) he puts the following text in a file named
+@file{classify.sh}.
 
 @example
-$ awk 'BEGIN@{print "# Region file format: DS9 version 4.1";     \
-             print "global color=green width=2";                \
-             print "fk5";@}                                      \
-       @{printf "circle(%s,%s,1\")\n", $3, $4;@}' reddest.txt     \
-       > reddest.reg
+for name in *.jpg
+do
+    eog $name &
+    processid=$!
+    echo -n "$name belongs to class: "
+    read class
+    echo $name $class >> classified.txt
+    kill $processid
+done
 @end example
 
-This region file can be loaded into DS9 with its @option{-regions} option
-to display over any image (that has world coordinate system). In the
-example below, we'll open Segment's output and load the regions over all
-the extensions (to see the image and the respective clump):
+@cindex Gedit
+@cindex GNU Emacs
+Fortunately GNU Emacs or even simpler editors like Gedit (part of the
+GNOME graphical user interface) will display the variables and shell
+constructs in different colors which can really help in understanding
+the script. Put simply, the @code{for} loop gets the name of each JPEG
+file in the directory this script is run in and puts it in
+@code{name}. In the shell, the value of a variable is used by putting
+a @code{$} sign before the variable name. Then Eye of GNOME is run on
+the image in the background to show him that image and its process ID
+is saved internally (this is necessary to close Eye of GNOME
+later). The shell then prompts the user to specify a class and after
+saving it in @code{class}, it prints the file name and the given class
+in the next line of a file named @file{classified.txt}. To make the
+script executable (so he can run it later any time he wants) he runs:
 
 @example
-$ ds9 -mecube seg/xdf-f160w.fits -zscale -zoom to fit    \
-      -regions load all reddest.reg
+$ chmod +x classify.sh
 @end example
 
-Finally, if this book or any of the programs in Gnuastro have been useful
-for your research, please cite the respective papers and share your
-thoughts and suggestions with us (it can be very encouraging). All Gnuastro
-programs have a @option{--cite} option to help you cite the authors' work
-more easily. Just note that it may be necessary to cite additional papers
-for different programs, so please try it out for any program you use.
+@noindent
+Now he is ready to do the classification, so he runs the script:
 
 @example
-$ astmkcatalog --cite
-$ astnoisechisel --cite
+$ ./classify.sh
 @end example
 
+@noindent
+In the end he can delete all the JPEG and FITS files along with Crop's log
+file with the following short command. The only files remaining are the
+script and the result of the classification.
+
+@example
+$ rm *.jpg *.fits astcrop.txt
+$ ls
+classified.txt   classify.sh
+@end example
 
+@noindent
+He can now use @file{classified.txt} as input to a plotting program to
+plot the histogram of the classes and start interpreting what these
+nebulous objects outside of the Galaxy are.
 
 
 
@@ -3722,7 +4408,6 @@ usage of some other free software that are not directly 
required by
 Gnuastro but might be useful in conjunction with it is discussed.
 
 
-
 @menu
 * Dependencies::                Necessary packages for Gnuastro.
 * Downloading the source::      Ways to download the source code.
@@ -13857,14 +14542,20 @@ little with the settings (in the order presented in 
the paper and
 inspect all the check images (options starting with @option{--check}) to
 see the effect of each parameter.
 
-@ref{General program usage tutorial} is also a good place to get a feeling
-of the modular principle behind Gnuastro's programs and how they are built
-to complement, and build upon, each other. The tutorial culminates in using
-NoiseChisel to detect galaxies and use its outputs to find the galaxy
-colors. Defining colors is a very common process in most
-science-cases. Therefore it is also recommended to (patiently) complete
-that tutorial for optimal usage of NoiseChisel in conjunction with all the
-other Gnuastro programs.
+We strongly recommend going over the two tutorials of @ref{General program
+usage tutorial} and @ref{Detecting large extended targets}. They are
+designed to show how to most effectively use NoiseChisel for the detection
+of small faint objects and large extended objects. In the meantime, they
+will show you the modular principle behind Gnuastro's programs and how they
+are built to complement, and build upon, each other. @ref{General program
+usage tutorial} culminates in using NoiseChisel to detect galaxies and use
+its outputs to find the galaxy colors. Defining colors is a very common
+process in most science-cases. Therefore it is also recommended to
+(patiently) complete that tutorial for optimal usage of NoiseChisel in
+conjunction with all the other Gnuastro programs. @ref{Detecting large
+extended targets} shows how you can optimize NoiseChisel's settings for
+very extended objects to successfully carve them out down to
+signal-to-noise ratio levels below 1/10.
 
 In @ref{NoiseChisel changes after publication}, we'll review the changes in
 NoiseChisel since the publication of @url{https://arxiv.org/abs/1505.01664,
@@ -13873,7 +14564,7 @@ detection, and output options in @ref{NoiseChisel 
input}, @ref{Detection
 options}, and @ref{NoiseChisel output}.
 
 @menu
-* NoiseChisel changes after publication::  Changes to the software after 
publication.
+* NoiseChisel changes after publication::  NoiseChisel updates after paper's 
publication.
 * Invoking astnoisechisel::     Options and arguments for NoiseChisel.
 @end menu
 
@@ -13890,19 +14581,21 @@ showing every step on multiple mock and real examples.
 However, the paper cannot be updated anymore, but NoiseChisel has evolved
 (and will continue to do so): better algorithms or steps have been found,
 thus options will be added or removed. This book is thus the final and
-definitive guide to NoiseChisel. For a more detailed list of changes in
-each release, please follow the @file{NEWS} file. The @file{NEWS} file is
-present in the released Gnuastro tarball (see @ref{Release tarball}). It is
-also included as a post-script to Gnuastro's announcements, see
-@ref{Announcements}.
-
-The most important change since the publication of that papaer is that
-NoiseChisel is now focused on detection: in spirit with Gnuastro's modular
-design (see @ref{Program design philosophy}), segmentation of the detected
-signal has been spinned-off to Gnuastro's Segment program (see
-@ref{Segment}). Below you can see the major changes since that paper was
-published. First, the removed options/features are discussed, then we
-review the new features that have been added.
+definitive guide to NoiseChisel. The aim of this section is to make the
+transition from the paper to the installed version on your system as
+smooth as possible with the list below. For a more detailed list of changes
+in previous Gnuastro releases/versions, please see the @file{NEWS}
+file@footnote{The @file{NEWS} file is present in the released Gnuastro
+tarball, see @ref{Release tarball}.}.
+
+The most important change since the publication of that paper is that from
+Gnuastro 0.6, NoiseChisel is only in charge of detection. Segmentation of
+the detected signal was spun-off into a separate program:
+@ref{Segment}. This spin-off allows much greater creativity and is in the
+spirit of Gnuastro's modular design (see @ref{Program design philosophy}).
+Below you can see the major changes since that paper was published. First,
+the removed options/features are discussed, then we review the new features
+that have been added.
 
 @noindent
 Removed features/options:
@@ -13988,6 +14681,7 @@ of false detections, see the descriptions under this 
option in
 
 @end itemize
 
+
 @node Invoking astnoisechisel,  , NoiseChisel changes after publication, 
NoiseChisel
 @subsection Invoking NoiseChisel
 
@@ -14833,9 +15527,9 @@ define segmentation only on signal: to separate and 
find sub-structure
 within the detections.
 
 @cindex Connected component labeling
-If the targets are clearly separated in the dataset (image), a simple
-@emph{connected
-components}@footnote{@url{https://en.wikipedia.org/wiki/Connected-component_labeling}}
+If the targets are clearly separated, or their detected regions aren't
+touching, a simple connected
+components@footnote{@url{https://en.wikipedia.org/wiki/Connected-component_labeling}}
 algorithm (very basic segmentation) is enough to separate the regions that
 are touching/connected. This is such a basic and simple form of
 segmentation that Gnuastro's Arithmetic program has an operator for it: see
@@ -14850,7 +15544,7 @@ $ astarithmetic binary.fits 2 connected-components
 @noindent
 You can even do a very basic detection (a threshold, say at value
 @code{100}) @emph{and} segmentation in Arithmetic with a single command
-like below to apply:
+like below:
 
 @example
 $ astarithmetic in.fits 100 gt 2 connected-components
@@ -14859,16 +15553,18 @@ $ astarithmetic in.fits 100 gt 2 connected-components
 However, in most astronomical situations our targets are not nicely
 separated or have a sharp boundary/edge (for a threshold to suffice): they
 touch (for example merging galaxies), or are simply in the same
-line-of-sight, causing their images to overlap. In particular, when you do
-your detection with NoiseChisel, you will detect signal to very low surface
-brightness limits: deep into the faint wings of galaxies or bright stars
-(which can extend very far and irregularly from their center). Therefore,
-it often happens that several galaxies are detected as one large
-detection. To continue your scientific analysis, a simple connected
-components algorithm will not suffice. It is therefore necessary to do a
-more sophisticated segmentation and break up the detected pixels (even
-those that are touching) into multiple target objects as accurately as
-possible.
+line-of-sight (which is much more common). This causes their images to
+overlap.
+
+In particular, when you do your detection with NoiseChisel, you will detect
+signal to very low surface brightness limits: deep into the faint wings of
+galaxies or bright stars (which can extend very far and irregularly from
+their center). Therefore, it often happens that several galaxies are
+detected as one large detection. Since they are touching, a simple
+connected components algorithm will not suffice. It is therefore necessary
+to do a more sophisticated segmentation and break up the detected pixels
+(even those that are touching) into multiple target objects as accurately
+as possible.
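As a toy illustration of the threshold-plus-connected-components idea from the Arithmetic example above, the sketch below thresholds a small 2-D grid and labels its 4-connected regions. This is only a minimal Python sketch (nested lists instead of a FITS image; 4-connectivity chosen for simplicity), not Gnuastro's implementation:

```python
from collections import deque

def threshold_cc(img, thresh):
    """Threshold a 2-D grid, then label 4-connected regions of
    above-threshold pixels (a toy analogue of
    `astarithmetic in.fits 100 gt 2 connected-components`)."""
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] > thresh and labels[r][c] == 0:
                current += 1                  # start a new labeled region
                labels[r][c] = current
                q = deque([(r, c)])
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and img[ny][nx] > thresh
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels
```

Pixels below the threshold keep label 0; each connected patch of above-threshold pixels gets its own positive integer, just like the labeled output of the Arithmetic command.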
 
 Segment will use a detection map and its corresponding dataset to find
 sub-structure over the detected areas and use them for its
@@ -14878,44 +15574,128 @@ place to start reading about Segment and 
understanding what it does (with
 many illustrative figures) is Section 3.2 of
 @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
 
+@cindex river
+@cindex Watershed
 As a summary, Segment first finds true @emph{clump}s over the
 detections. Clumps are associated with local maxima/minima@footnote{By
 default the maximum is used as the first clump pixel, to define clumps
-based on local minima, use the @option{--minima} option.})  and extend over
+based on local minima, use the @option{--minima} option.} and extend over
 the neighboring pixels until they reach a local minimum/maximum
-(@emph{river}). Segment will use the distribution of clump signal-to-noise
-ratios over the undetected regions as reference to find ``true'' clumps
-over the detections. Using the undetected regions can be disabled by
-directly giving a signal-to-noise ratio to @option{--clumpsnthresh}.
+(@emph{river}/@emph{watershed}). By default, Segment will use the
+distribution of clump signal-to-noise ratios over the undetected regions as
+reference to find ``true'' clumps over the detections. Using the undetected
+regions can be disabled by directly giving a signal-to-noise ratio to
+@option{--clumpsnthresh}.
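The clump-building summary above can be sketched in one dimension: pixels are visited from the highest value downwards, each local maximum seeds a new clump, and a pixel where two clumps would meet becomes a river pixel. This is a hypothetical toy, not Segment's actual over-segmentation code (which works over 2-D/3-D detections and then vets clumps by signal-to-noise):

```python
def clumps_1d(vals):
    """Toy 1-D clump building: visit pixels from the highest value
    down; a pixel with no labeled neighbor is a local maximum and
    seeds a new clump, a pixel touching one clump joins it, and a
    pixel touching two different clumps becomes a river (-1)."""
    order = sorted(range(len(vals)), key=lambda i: vals[i], reverse=True)
    labels = [0] * len(vals)      # 0 = unprocessed, -1 = river
    nextlab = 1
    for i in order:
        neighbors = {labels[j] for j in (i - 1, i + 1)
                     if 0 <= j < len(vals) and labels[j] > 0}
        if not neighbors:          # local maximum: seed a new clump
            labels[i] = nextlab
            nextlab += 1
        elif len(neighbors) == 1:  # extend the single touching clump
            labels[i] = neighbors.pop()
        else:                      # two clumps meet: river pixel
            labels[i] = -1
    return labels
```

Running it on a small array with two peaks gives two clump labels separated by a single river pixel.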
 
 The true clumps are then grown to a certain threshold over the
-detections. Based on the strength of the connections between the grown
-clumps, they are considered parts of one @emph{object} or as separate
-@emph{object}s. See Section 3.2 of Akhlaghi and Ichikawa [2015] (link
-above) for more. Segment's main output are thus two labeled datasets: 1)
-clumps, and 2) objects. See @ref{Segment output} for more.
+detections. Based on the strength of the connections (rivers/watersheds)
+between the grown clumps, they are considered parts of one @emph{object} or
+as separate @emph{object}s. See Section 3.2 of Akhlaghi and Ichikawa [2015]
+(link above) for more. Segment's main output are thus two labeled datasets:
+1) clumps, and 2) objects. See @ref{Segment output} for more.
 
 To start learning about Segment, especially in relation to detection
 (@ref{NoiseChisel}) and measurement (@ref{MakeCatalog}), the recommended
 references are @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
-[2015]} and @url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]}. Just
-have in mind that Segment became a separate program in 2018 (after those
-papers). Therefore we don't currently need to extend the introduction any
-further and can directly dive into the invocation details in @ref{Invoking
-astsegment}.
+[2015]} and @url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]}.
 
-Those papers cannot be updated any more but the software will
-evolve. Therefore this book is the definitive reference. Major changes from
-those papers will be documented here when the software starts to diverge
-from them the future, similar what we have already done for NoiseChisel in
-@ref{NoiseChisel changes after publication}.
+Those papers cannot be updated any more but the software will evolve. For
+example Segment became a separate program (from NoiseChisel) in 2018 (after
+those papers were published). Therefore this book is the definitive
+reference. To help in the transition from those papers to the software you
+are using, see @ref{Segment changes after publication}. Finally, in
+@ref{Invoking astsegment}, we'll discuss Segment's inputs, outputs and
+configuration options.
 
 
 @menu
+* Segment changes after publication::  Segment updates after paper's 
publication.
 * Invoking astsegment::         Inputs, outputs and options to Segment
 @end menu
 
-@node Invoking astsegment,  , Segment, Segment
+@node Segment changes after publication, Invoking astsegment, Segment, Segment
+@subsection Segment changes after publication
+
+Segment's main algorithm and working strategy was initially defined and
+introduced in Section 3.2 of @url{https://arxiv.org/abs/1505.01664,
+Akhlaghi and Ichikawa [2015]}. At that time it was part of
+@ref{NoiseChisel}, Gnuastro's detection program@footnote{Until Gnuastro
+version 0.6 (May 2018), NoiseChisel was in charge of detection @emph{and}
+segmentation. For increased creativity and modularity, NoiseChisel's
+segmentation features were spun-off into a separate program (Segment).}. It
+is strongly recommended to read this paper for a good understanding of what
+Segment does and how each parameter influences the output. To help in
+understanding how Segment works, that paper has a large number of figures
+showing every step on multiple mock and real examples.
+
+However, the paper cannot be updated anymore, but Segment has evolved (and
+will continue to do so): better algorithms or steps have been (and will be)
+found. This book is thus the final and definitive guide to Segment. The aim
+of this section is to make the transition from the paper to your installed
+version as smooth as possible through the list below. For a more detailed
+list of changes in previous Gnuastro releases/versions, please follow the
+@file{NEWS} file@footnote{The @file{NEWS} file is present in the released
+Gnuastro tarball, see @ref{Release tarball}.}.
+
+@itemize
+
+@item
+Since the spin-off from NoiseChisel, the default kernel to smooth the input
+for convolution has a FWHM of 1.5 pixels (still a Gaussian). This is
+slightly less than NoiseChisel's default kernel (which has a FWHM of 2
+pixels). This enables the better detection of sharp clumps: as the kernel
+gets wider, the lower signal-to-noise (but sharp/small) clumps will be
+washed away into the noise. You can use MakeProfiles to build your own
+kernel if this is too sharp/wide for your purpose, see the
+@option{--kernel} option in @ref{Segment input}.
+
+The ability to use a different convolution kernel for detection and
+segmentation is one example of how separating detection from segmentation
+into separate programs can increase creativity. In detection, you want to
+detect the diffuse and extended emission, but in segmentation, you want to
+detect sharp peaks.
+
+@item
+The criteria to select true from false clumps is the peak signal-to-noise
+ratio. This value is calculated from a clump's peak value (@mymath{C_c})
+and the highest valued river pixel around that clump (@mymath{R_c}). Both
+are calculated on the convolved image (signified by the @mymath{c}
+subscript). To avoid absolute differences, it is then divided by the input
+Sky standard deviation under that clump @mymath{\sigma} as shown below.
+
+@dispmath{C_c-R_c\over \sigma}
+
+The input Sky standard deviation dataset (@option{--std}) is assumed to be
+for the unconvolved image. Therefore a constant factor (related to the
+convolution kernel) is necessary to convert this into an absolute peak
+signal-to-noise ratio@footnote{You can mask all detections on the convolved
+image with @ref{Arithmetic}, then calculate the standard deviation of the
+(masked) convolved with the @option{--sky} option of @ref{Statistics} and
+compare values on the same tile with NoiseChisel's output.}. But as far as
+Segment is concerned, this absolute value is irrelevant: it uses
+the ambient noise (undetected regions) to find the numerical threshold of
+this fraction and applies that over the detected regions.
+
+The convolved image has much less scatter, and the peak (maximum when
+@option{--minima} is not called) value of a distribution is strongly
+affected by scatter. Therefore @mymath{C_c-R_c} is a more reliable
+(with less scatter) measure to identify signal than @mymath{C-R} (on the
+un-convolved image).
+
+Initially, the total signal-to-noise ratio of each clump was used, see
+Section 3.2.1 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and
+Ichikawa [2015]}. However, that measure's completeness decreased
+dramatically when clumps were present on gradients. In tests, the new peak
+signal-to-noise measure proved to be more successful in detecting clumps
+on gradients and on flatter regions.
+
+@item
+With the new @option{--minima} option, it is now possible to detect inverse
+clumps (for example absorption features), where the clump building should
+begin from its smallest value.
+@end itemize
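Putting the peak signal-to-noise definition from the list above into code, a clump's measure could be computed as below. All names here are hypothetical illustrations (`conv` holds the convolved image's pixel values; `clump_pixels` and `river_pixels` are index lists), not Segment's internal API:

```python
def clump_peak_sn(conv, clump_pixels, river_pixels, sky_std):
    """Peak S/N of one clump: (C_c - R_c) / sigma, where C_c is the
    clump's peak and R_c the highest-valued river pixel around it,
    both measured on the convolved image."""
    C_c = max(conv[i] for i in clump_pixels)   # clump peak (convolved)
    R_c = max(conv[i] for i in river_pixels)   # highest surrounding river pixel
    return (C_c - R_c) / sky_std
```

Segment compares the distribution of this fraction over the undetected regions with its value for each clump over the detections to decide which clumps are ``true''.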
+
+
+@node Invoking astsegment,  , Segment changes after publication, Segment
 @subsection Invoking Segment
 
 Segment will identify substructure within the detected regions of an input
@@ -15102,7 +15882,7 @@ detection map is given, the extension can be specified 
with
 HDU/extension is in the main input argument (input file specified with no
 option).
 
-Segmentation (clumps or objects) will only take place over the non-zero
+The final segmentation (clumps or objects) will only be over the non-zero
 pixels of this detection map. The dataset must have the same size as the
 input image. Only datasets with an integer type are acceptable for the
 labeled image, see @ref{Numeric data types}. If your detection map only has
@@ -15151,6 +15931,18 @@ The convolved image to avoid internal convolution by 
Segment. The usage of
 this option is identical to NoiseChisel's @option{--convolved} option
 (@ref{NoiseChisel input}). Please see the descriptions there for more.
 
+If you want to use the same convolution kernel for detection (with
+@ref{NoiseChisel}) and segmentation, you can use the same convolved image
+with this option (that is also available in NoiseChisel). However, just be
+careful to use the input to NoiseChisel as the input to Segment also, then
+use the @option{--sky} and @option{--std} to specify the Sky and its
+standard deviation (from NoiseChisel's output). Recall that when
+NoiseChisel is not called with @option{--rawoutput}, the first extension of
+NoiseChisel's output is the @emph{Sky-subtracted} input (see
+@ref{NoiseChisel output}). So if you use the same convolved image that you
+fed to NoiseChisel, but use NoiseChisel's output with Segment's
+@option{--convolved}, then the convolved image won't be Sky subtracted.
+
 @item --chdu
 The HDU/extension containing the convolved image (given to
 @option{--convolved}). For acceptable values, please see the description of
@@ -17420,12 +18212,12 @@ dimensions we are dealing with.
 
 
 @node PSF, Stars, Defining an ellipse and ellipsoid, Modeling basics
-@subsubsection Point Spread Function
+@subsubsection Point spread function
 
 @cindex PSF
 @cindex Point source
 @cindex Diffraction limited
-@cindex Point Spread Function
+@cindex Point spread function
 @cindex Spread of a point source
 Assume we have a `point' source, or a source that is far smaller
 than the maximum resolution (a pixel). When we take an image of it, it
diff --git a/lib/Makefile.am b/lib/Makefile.am
index 4445f97..82e556a 100644
--- a/lib/Makefile.am
+++ b/lib/Makefile.am
@@ -106,7 +106,6 @@ EXTRA_DIST = gnuastro.pc.in $(headersdir)/README 
$(internaldir)/README  \
   $(internaldir)/arithmetic-or.h $(internaldir)/arithmetic-plus.h       \
   $(internaldir)/checkset.h $(internaldir)/commonopts.h                 \
   $(internaldir)/config.h.in $(internaldir)/fixedstringmacros.h         \
-  $(internaldir)/kernel-2d.h $(internaldir)/kernel-3d.h                 \
   $(internaldir)/options.h $(internaldir)/tableintern.h                 \
   $(internaldir)/timing.h
 
diff --git a/lib/statistics.c b/lib/statistics.c
index b476f7e..a29303e 100644
--- a/lib/statistics.c
+++ b/lib/statistics.c
@@ -345,6 +345,7 @@ gal_data_t *
 gal_statistics_quantile(gal_data_t *input, double quantile, int inplace)
 {
   void *blank;
+  int increasing;
   size_t dsize=1, index;
   gal_data_t *nbs=gal_statistics_no_blank_sorted(input, inplace);
   gal_data_t *out=gal_data_alloc(NULL, nbs->type, 1, &dsize,
@@ -353,8 +354,16 @@ gal_statistics_quantile(gal_data_t *input, double 
quantile, int inplace)
   /* Only continue processing if there are non-blank elements. */
   if(nbs->size)
     {
-      /* Find the index of the quantile. */
-      index=gal_statistics_quantile_index(nbs->size, quantile);
+      /* Set the increasing value. */
+      increasing = nbs->flag & GAL_DATA_FLAG_SORTED_I;
+
+      /* Find the index of the quantile; note that if it is sorted in
+         decreasing order, then we'll need to get the index of the inverse
+         quantile. */
+      index=gal_statistics_quantile_index(nbs->size,
+                                          ( increasing
+                                            ? quantile
+                                            : (1.0f - quantile) ) );
 
       /* Write the value at this index into the output. */
       if(index==GAL_BLANK_SIZE_T)
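The intent of the change to @code{gal_statistics_quantile} above can be mimicked in a few lines of Python: when the non-blank sorted array is in decreasing order, the requested quantile is mapped to its inverse before indexing. The nearest-rank index rule used here is an assumption for illustration; @code{gal_statistics_quantile_index} may round differently:

```python
def quantile_value(sorted_vals, q, increasing=True):
    """Return the quantile q of an already-sorted sequence.  For a
    decreasing sort, use the index of the inverse quantile (1 - q),
    mirroring the fix in gal_statistics_quantile."""
    if not increasing:
        q = 1.0 - q                            # inverse quantile for decreasing sort
    index = round(q * (len(sorted_vals) - 1))  # assumed nearest-rank rule
    return sorted_vals[index]
```

With this mapping, the same quantile value is returned whether the input was sorted in increasing or decreasing order.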


