cut or similar utility

Linux for blind general discussion blinux-list at
Thu Feb 25 03:04:26 UTC 2021

Tim here.  Awk will get you where you want to go.  The "NF" variable
contains the number of fields on any given row, so if you just want
the number of fields in the first row:

  $ awk '{print NF; exit}' file.txt
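For instance, on a small throwaway file (sample.txt here is invented
data, just to show the shape of the output):

```shell
# Create a scratch file with three whitespace-separated fields per row.
printf 'a b c\nd e f\n' > sample.txt

# NF on the first row is 3; "exit" stops awk after that one print.
awk '{print NF; exit}' sample.txt
# prints: 3
```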

However, if your rows can have differing numbers of fields, it gets a
bit more complex.  Do you want the number of fields in the longest row?

  $ awk 'NF>m{m=NF}END{print m}' file.txt

The shortest one?

  $ awk 'NF<m || NR==1{m=NF}END{print m}' file.txt
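To see both of those on the same invented data (rows of 2, 4, and 1
fields):

```shell
# Scratch file: rows with 2, 4, and 1 fields respectively.
printf 'a b\na b c d\na\n' > sample.txt

# Longest row: keep a running maximum of NF.
awk 'NF>m{m=NF}END{print m}' sample.txt
# prints: 4

# Shortest row: seed m from the first row (NR==1), then track the minimum.
awk 'NF<m || NR==1{m=NF}END{print m}' sample.txt
# prints: 1
```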

The stats on each of them?

  $ awk '{++a[NF]}END{for (k in a) print a[k], k}' file.txt | sort -n

(that's the count followed by the number of fields, sorted by
increasing frequency)
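A quick sketch of that histogram on made-up data (two 2-field rows and
one 3-field row):

```shell
# Scratch file: two rows of 2 fields, one row of 3 fields.
printf 'a b\na b\na b c\n' > sample.txt

# a[NF] counts how many rows have each field count; each output line
# is "count fields", and sort -n orders them by increasing count.
awk '{++a[NF]}END{for (k in a) print a[k], k}' sample.txt | sort -n
# prints:
# 1 3
# 2 2
```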

By default awk considers things "fields" if they're separated by one
or more spaces or tabs, but you can change the delimiter by passing
the -F option, so if your file is delimited by colons, you could use:

  $ awk -F":" '{print NF; exit}' /etc/passwd
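If you'd rather try that on throwaway data than on your real
/etc/passwd, a single fabricated passwd-style line (fake-passwd.txt is
my invention) behaves the same way:

```shell
# One made-up colon-delimited line in the usual 7-field passwd format.
printf 'root:x:0:0:root:/root:/bin/bash\n' > fake-passwd.txt

# With -F":" each colon separates fields, so NF is 7.
awk -F":" '{print NF; exit}' fake-passwd.txt
# prints: 7
```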

Hopefully that gives you some material to start with.  I'm a bit of
an awk junkie, so if one of those doesn't work for you, let me know
and I can tweak it pretty readily.


On February 24, 2021, Linux for blind general discussion wrote:
> is cut or a similar utility once passed a file able to analyze the
> file and return the number of fields in that file it could find?
