
cmder: client software for Windows

Cmder bundles ConEmu, msysgit, and clink together; unzip it and it is ready to use, no configuration needed. It can be downloaded from the official site.

Two versions are offered for download, mini and full; the only difference is whether msysgit, the standard toolset of Git for Windows, is bundled. Our Linux subsystem already has a complete toolchain, so the mini version is enough.

Adding cmder to the right-click menu

Add cmder to the PATH environment variable, then open a cmder window and press Ctrl+T; check "Run as administrator" and click Start to get a terminal with administrator privileges. Enter the following command in the new terminal, and you will then be able to open cmder from the right-click menu.

Cmder.exe /REGISTER ALL
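
To remove the entry from the right-click menu later, cmder also provides a matching unregister switch (verify against your cmder version's documentation):

Cmder.exe /UNREGISTER ALL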

Making cmder run bash directly on startup

Open a cmder window, then:

Click the menu button in the lower-right corner ==> Settings ==> Startup ==> Command line, and enter "bash -cur_console:p".

lspci: display the PCI bus information of the current machine

.. note::
The east wind at night has bloomed a thousand trees, and blown down stars like rain.
Xin Qiji, "Qing Yu An · Yuan Xi"

The lspci command displays information about the PCI buses in the system and all the PCI devices connected to them.

Its official definition is:

lspci - list all PCI devices

By default, lspci shows a brief list of devices. Options can be used to request more verbose output, or output intended for parsing by other programs.

Note, however, that on many operating systems access to some parts of the PCI configuration space is restricted to root, so the lspci features available to ordinary users are limited.

Usage:

$ lspci [options]

Three commonly used options are:

  • -n show the PCI vendor and device codes as numbers
  • -t show a tree-like diagram of the PCI device hierarchy
  • -v show more verbose output

Display all PCI bus information of the current host:

Default output with no options

$ lspci
00:00.0 Host bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 (rev 02)
00:01.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 1 (rev 02)
00:02.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 (rev 02)
00:03.0 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 (rev 02)
00:03.2 PCI bridge: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 (rev 02)
00:04.0 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 0 (rev 02)
00:04.1 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 1 (rev 02)
00:04.2 System peripheral: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 2 (rev 02)
......

Show the PCI vendor and device codes as numbers

Numeric output

$ lspci -n
00:00.0 0600: 8086:2f00 (rev 02)
00:01.0 0604: 8086:2f02 (rev 02)
00:02.0 0604: 8086:2f04 (rev 02)
00:03.0 0604: 8086:2f08 (rev 02)
00:03.2 0604: 8086:2f0a (rev 02)
00:04.0 0880: 8086:2f20 (rev 02)
00:04.1 0880: 8086:2f21 (rev 02)
00:04.2 0880: 8086:2f22 (rev 02)
......

Show the device names together with the numeric codes

$ lspci -nn
00:00.0 Host bridge [0600]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMI2 [8086:2f00] (rev 02)
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 1 [8086:2f02] (rev 02)
00:02.0 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 2 [8086:2f04] (rev 02)
00:03.0 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 [8086:2f08] (rev 02)
00:03.2 PCI bridge [0604]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 PCI Express Root Port 3 [8086:2f0a] (rev 02)
00:04.0 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 0 [8086:2f20] (rev 02)
00:04.1 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 1 [8086:2f21] (rev 02)
00:04.2 System peripheral [0880]: Intel Corporation Xeon E7 v3/Xeon E5 v3/Core i7 DMA Channel 2 [8086:2f22] (rev 02)
......

Show the PCI device hierarchy as a tree:

$ lspci -t
-+-[0000:ff]-+-08.0
| +-08.2
| +-1f.0
| \-1f.2
+-[0000:80]-+-01.0-[81]----00.0
| +-04.0
| +-05.1
| +-05.2
| \-05.4
+-[0000:7f]-+-08.0
| +-08.2
| +-0c.1
| \-0c.2
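
The -v option listed above has no example yet. A minimal sketch follows; the slot 00:03.0 is illustrative and should be picked from your own lspci output. The -s option restricts the output to one device, and running as root reveals the fields that are hidden from ordinary users:

$ lspci -v -s 00:03.0
$ sudo lspci -vv -s 00:03.0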

Swapping true and false: tr

.. note::
When the false is taken for the true, the true becomes false; where nothing is made into something, something turns into nothing.
Cao Xueqin, Dream of the Red Chamber

The Linux tr command translates or deletes characters.

tr reads data from standard input, translates the characters, and writes the result to standard output.

Its official definition is:

tr - translate or delete characters

Usage:

$ tr [OPTION]... SET1 [SET2]

Commonly used options and character classes are:

  • -d, --delete: delete the specified characters
  • [:lower:]: all lowercase letters
  • [:upper:]: all uppercase letters
  • [:blank:]: all horizontal whitespace

Convert all lowercase letters a-z to uppercase

Basic example

$ echo "Hello World, Welcome to Linux!" | tr a-z A-Z
HELLO WORLD, WELCOME TO LINUX!

# an alternative; quoting the classes keeps the shell from expanding them
$ echo "Hello World, Welcome to Linux!" | tr '[:lower:]' '[:upper:]'
HELLO WORLD, WELCOME TO LINUX!

Convert all uppercase letters A-Z to lowercase

Basic example

$ echo "Hello World, Welcome to Linux!" | tr A-Z a-z
hello world, welcome to linux!

# an alternative
$ echo "Hello World, Welcome to Linux!" | tr '[:upper:]' '[:lower:]'
hello world, welcome to linux!

This might be handy for naming things

Many variable and function names are formed by dropping the vowels; the -d option can do this, as follows:

$ echo "Hello World, Welcome to Linux!" | tr -d a,o,e,i
Hll Wrld Wlcm t Lnux!

That said, deleting this much is not necessarily a good thing...

For instance, at first glance it is hard to tell what Wlcm is supposed to mean.
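
Note also that in a,o,e,i the commas are themselves part of the delete set, which is why the comma after "World" disappeared above. Writing the set without commas keeps the punctuation (a small sketch):

$ echo "Hello World, Welcome to Linux!" | tr -d 'aoei'
Hll Wrld, Wlcm t Lnux!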

Remove all spaces from a file

Similarly, -d combined with [:blank:] quickly removes all spaces.

$ echo "Hello World, Welcome to Linux!" | tr -d '[:blank:]'
HelloWorld,WelcometoLinux!
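
Since tr only reads standard input, applying this to an actual file is done with shell redirection (input.txt and output.txt are placeholder names):

$ tr -d '[:blank:]' < input.txt > output.txt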

Quickly determine your CentOS/RHEL release

Do you know exactly which CentOS/RHEL release you are currently running?

Perhaps the release does not seem important to you, but when it comes to bug fixes, driver support, or software configuration, you need to know exactly which distribution release you are on and which kernel version it runs.

For a system administrator this question is easy; if you are a beginner, here are a few ways to find out quickly.

The uname command

$ uname -or
3.10.0-693.17.1.el7.x86_64 GNU/Linux
$ uname -a
Linux local 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

uname prints system information; -a prints everything, while -or prints the operating system name and the kernel release.

The RPM command

RPM is short for Red Hat Package Manager, the package manager used across the Red Hat family of systems; we can use it to determine the CentOS/RHEL release.

$ rpm --query centos-release    # on RHEL, query redhat-release instead
centos-release-7-4.1708.el7.centos.x86_64

The hostnamectl command

$ hostnamectl
Static hostname: local
Icon name: computer-server
Chassis: server
Machine ID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Boot ID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-693.17.1.el7.x86_64
Architecture: x86-64

The lsb_release command

lsb_release displays some LSB (Linux Standard Base) and distribution information.

If the command is not found, you may need to install it first: yum install redhat-lsb

$ lsb_release -d
Description: CentOS Linux release 7.4.1708 (Core)

Reading the system files directly

The commands above all obtain the information by querying the system; we can also read it straight from the files the system ships with, as shown below:

$ cat /etc/centos-release
CentOS Linux release 7.4.1708 (Core)

$ cat /etc/system-release
CentOS Linux release 7.4.1708 (Core)

$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
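
As a convenience, the checks above can be folded into one small script; a minimal sketch assuming a CentOS/RHEL-family system:

#!/bin/sh
# Print distribution and kernel information, trying the common sources in order.
if [ -r /etc/os-release ]; then
    . /etc/os-release                  # defines PRETTY_NAME, VERSION_ID, etc.
    echo "Distribution: $PRETTY_NAME"
elif [ -r /etc/system-release ]; then
    echo "Distribution: $(cat /etc/system-release)"
fi
echo "Kernel: $(uname -r)"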


NVIDIA

Installing the NVIDIA graphics driver

CentOS 7/RHEL

Install the dependencies

yum -y update
yum -y groupinstall "GNOME Desktop" "Development Tools"
yum -y install kernel-devel

Download the latest NVIDIA driver: ==> http://www.nvidia.com/object/unix.html ==> Latest Long Lived Branch version

Add support for automatically rebuilding the kernel module (DKMS)

yum -y install epel-release
yum -y install dkms

Reboot the system to make sure it is running the latest kernel.

Edit /etc/default/grub and append rd.driver.blacklist=nouveau nouveau.modeset=0 to the "GRUB_CMDLINE_LINUX" line.

Regenerate the grub file with the above change:

grub2-mkconfig -o /boot/grub2/grub.cfg

Edit or create the file /etc/modprobe.d/blacklist.conf and add the line blacklist nouveau.

Back up the old initramfs file and create a new one:

mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img

dracut /boot/initramfs-$(uname -r).img $(uname -r)

Reboot the machine, switch to text mode with systemctl isolate multi-user.target, then run sh NVIDIA-Linux-x86_64-*.run and answer yes to every prompt.
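
Before launching the installer it is worth confirming that nouveau really is disabled; if the following prints nothing, the blacklist took effect:

$ lsmod | grep nouveau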

Installing the CUDA Toolkit

Redhat/CentOS

Download the latest CUDA Toolkit (the runfile, not the rpm):
==> https://developer.nvidia.com/cuda-downloads ==> Linux ==> x86_64 ==> RHEL/CentOS ==> 7 ==> runfile (local)

sh cuda_*.run
Answer no when asked to install the NVIDIA driver: we already installed it above, and the driver bundled with CUDA is usually older. The defaults are fine for the other options.

Add the environment variables:

$ export PATH=/usr/local/cuda-9.2/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/cuda-9.2/lib64:$LD_LIBRARY_PATH
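
Exports entered this way only last for the current shell. To make them permanent, append them to your shell profile (adjust the path to the CUDA version actually installed):

$ echo 'export PATH=/usr/local/cuda-9.2/bin:$PATH' >> ~/.bashrc
$ echo 'export LD_LIBRARY_PATH=/usr/local/cuda-9.2/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
$ source ~/.bashrc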

Full transcript

sudo ./cuda_9.2.88_396.26_linux.run 
Logging to /tmp/cuda_install_6554.log
Using more to view the EULA.
End User License Agreement
--------------------------


Preface
-------

The Software License Agreement in Chapter 1 and the Supplement
in Chapter 2 contain license terms and conditions that govern
the use of NVIDIA software. By accepting this agreement, you
agree to comply with all the terms and conditions applicable
to the product(s) included herein.


NVIDIA Driver


Description

This package contains the operating system driver and
fundamental system software components for NVIDIA GPUs.


NVIDIA CUDA Toolkit


Description

The NVIDIA CUDA Toolkit provides command-line and graphical
tools for building, debugging and optimizing the performance
of applications accelerated by NVIDIA GPUs, runtime and math
libraries, and documentation including programming guides,
user manuals, and API references.


Default Install Location of CUDA Toolkit

Windows platform:

%ProgramFiles%\NVIDIA GPU Computing Toolkit\CUDA\v#.#

Linux platform:

/usr/local/cuda-#.#

Mac platform:

/Developer/NVIDIA/CUDA-#.#


NVIDIA CUDA Samples


Description

This package includes over 100+ CUDA examples that demonstrate
various CUDA programming principles, and efficient CUDA
implementation of algorithms in specific application domains.


Default Install Location of CUDA Samples

Windows platform:

%ProgramData%\NVIDIA Corporation\CUDA Samples\v#.#

Linux platform:

/usr/local/cuda-#.#/samples

and

$HOME/NVIDIA_CUDA-#.#_Samples

Mac platform:

/Developer/NVIDIA/CUDA-#.#/samples


NVIDIA Nsight Visual Studio Edition (Windows only)


Description

NVIDIA Nsight Development Platform, Visual Studio Edition is a
development environment integrated into Microsoft Visual
Studio that provides tools for debugging, profiling, analyzing
and optimizing your GPU computing and graphics applications.


Default Install Location of Nsight Visual Studio Edition

Windows platform:

%ProgramFiles(x86)%\NVIDIA Corporation\Nsight Visual Studio Edition #.#


1. NVIDIA Software License Agreement
------------------------------------


Release Date: October 20, 2016
------------------------------


IMPORTANT NOTICE -- READ BEFORE DOWNLOADING, INSTALLING,
COPYING OR USING THE LICENSED SOFTWARE:
--------------------------------------------------------

This Software License Agreement ("SLA”), made and entered
into as of the time and date of click through action
(“Effective Date”), is a legal agreement between you and
NVIDIA Corporation ("NVIDIA") and governs the use of the
NVIDIA computer software and the documentation made available
for use with such NVIDIA software. By downloading, installing,
copying, or otherwise using the NVIDIA software and/or
documentation, you agree to be bound by the terms of this SLA.
If you do not agree to the terms of this SLA, do not download,
install, copy or use the NVIDIA software or documentation. IF
YOU ARE ENTERING INTO THIS SLA ON BEHALF OF A COMPANY OR OTHER
LEGAL ENTITY, YOU REPRESENT THAT YOU HAVE THE LEGAL AUTHORITY
TO BIND THE ENTITY TO THIS SLA, IN WHICH CASE “YOU” WILL
MEAN THE ENTITY YOU REPRESENT. IF YOU DON’T HAVE SUCH
AUTHORITY, OR IF YOU DON’T ACCEPT ALL THE TERMS AND
CONDITIONS OF THIS SLA, THEN NVIDIA DOES NOT AGREE TO LICENSE
THE LICENSED SOFTWARE TO YOU, AND YOU MAY NOT DOWNLOAD,
INSTALL, COPY OR USE IT.


1.1. License


1.1.1. License Grant

Subject to the terms of the AGREEMENT, NVIDIA hereby grants
you a non-exclusive, non-transferable license, without the
right to sublicense (except as expressly set forth in a
Supplement), during the applicable license term unless earlier
terminated as provided below, to have Authorized Users install
and use the Software, including modifications (if expressly
permitted in a Supplement), in accordance with the
Documentation. You are only licensed to activate and use
Licensed Software for which you a have a valid license, even
if during the download or installation you are presented with
other product options. No Orders are binding on NVIDIA until
accepted by NVIDIA. Your Orders are subject to the AGREEMENT.

SLA Supplements

Certain Licensed Software licensed under this SLA may be
subject to additional terms and conditions that will be
presented to you in a Supplement for acceptance prior to the
delivery of such Licensed Software under this SLA and the
applicable Supplement. Licensed Software will only be
delivered to you upon your acceptance of all applicable terms.


1.1.2. Limited Purposes Licenses

If your license is provided for one of the purposes indicated
below, then notwithstanding contrary terms in Section 1.1 or
in a Supplement, such licenses are for internal use and do not
include any right or license to sub-license and distribute the
Licensed Software or its output in any way in any public
release, however limited, and/or in any manner that provides
third parties with use of or access to the Licensed Software
or its functionality or output, including (but not limited to)
external alpha or beta testing or development phases. Further:

1.

Evaluation License: You may use evaluation licenses solely
for your internal evaluation of the Licensed Software for
broader adoption within your Enterprise or in connection
with a NVIDIA product purchase decision, and such licenses
have an expiration date as indicated by NVIDIA in its sole
discretion (or ninety days from the date of download if no
other duration is indicated).

2.

Educational/Academic License: You may use
educational/academic licenses solely for educational
purposes and all users must be enrolled or employed by an
academic institution. If you do not meet NVIDIA’s
academic program requirements for educational
institutions, you have no rights under this license.

3.

Test/Development License. You may use test/development
licenses solely for your internal development, testing
and/or debugging of your software applications or for
interoperability testing with the Licensed Software, and
such licenses have an expiration date as indicated by
NVIDIA in its sole discretion (or one year from the date
of download if no other duration is indicated). NVIDIA
Confidential Information under the AGREEMENT includes
output from Licensed Software developer tools identified
as “Pro” versions, where the output reveals
functionality or performance data pertinent to NVIDIA
hardware or software products.


1.1.3. Pre-release Licenses

With respect to alpha, beta, preview, and other pre-release
Software and Documentation (“Pre-Release Licensed
Software”) delivered to you under the AGREEMENT you
acknowledge and agree that such Pre-Release Licensed Software
(i) may not be fully functional, may contain errors or design
flaws, and may have reduced or different security, privacy,
accessibility, availability, and reliability standards
relative to commercially provided NVIDIA software and
documentation, and (ii) use of such Pre-Release Licensed
Software may result in unexpected results, loss of data,
project delays or other unpredictable damage or loss.
THEREFORE, PRE-RELEASE LICENSED SOFTWARE IS NOT INTENDED FOR
USE, AND SHOULD NOT BE USED, IN PRODUCTION OR
BUSINESS-CRITICAL SYSTEMS. NVIDIA has no obligation to make
available a commercial version of any Pre-Release Licensed
Software and NVIDIA has the right to abandon development of
Pre-Release Licensed Software at any time without liability.


1.1.4. Enterprise and Contractor Usage

You may allow your Enterprise employees and Contractors to
access and use the Licensed Software pursuant to the terms of
the AGREEMENT solely to perform work on your behalf, provided
further that with respect to Contractors: (i) you obtain a
written agreement from each Contractor which contains terms
and obligations with respect to access to and use of Licensed
Software no less protective of NVIDIA than those set forth in
the AGREEMENT, and (ii) such Contractor’s access and use
expressly excludes any sublicensing or distribution rights for
the Licensed Software. You are responsible for the compliance
with the terms and conditions of the AGREEMENT by your
Enterprise and Contractors. Any act or omission that, if
committed by you, would constitute a breach of the AGREEMENT
shall be deemed to constitute a breach of the AGREEMENT if
committed by your Enterprise or Contractors.


1.1.5. Services

Except as expressly indicated in an Order, NVIDIA is under no
obligation to provide support for the Licensed Software or to
provide any patches, maintenance, updates or upgrades under
the AGREEMENT. Unless patches, maintenance, updates or
upgrades are provided with their separate governing terms and
conditions, they constitute Licensed Software licensed to you
under the AGREEMENT.


1.2. Limitations


1.2.1. License Restrictions

Except as expressly authorized in the AGREEMENT, you agree
that you will not (nor authorize third parties to): (i) copy
and use Software that was licensed to you for use in one or
more NVIDIA hardware products in other unlicensed products
(provided that copies solely for backup purposes are allowed);
(ii) reverse engineer, decompile, disassemble (except to the
extent applicable laws specifically require that such
activities be permitted) or attempt to derive the source code,
underlying ideas, algorithm or structure of Software provided
to you in object code form; (iii) sell, transfer, assign,
distribute, rent, loan, lease, sublicense or otherwise make
available the Licensed Software or its functionality to third
parties (a) as an application services provider or service
bureau, (b) by operating hosted/virtual system environments,
(c) by hosting, time sharing or providing any other type of
services, or (d) otherwise by means of the internet; (iv)
modify, translate or otherwise create any derivative works of
any Licensed Software; (v) remove, alter, cover or obscure any
proprietary notice that appears on or with the Licensed
Software or any copies thereof; (vi) use the Licensed
Software, or allow its use, transfer, transmission or export
in violation of any applicable export control laws, rules or
regulations; (vii) distribute, permit access to, or sublicense
the Licensed Software as a stand-alone product; (viii) bypass,
disable, circumvent or remove any form of copy protection,
encryption, security or digital rights management or
authentication mechanism used by NVIDIA in connection with the
Licensed Software, or use the Licensed Software together with
any authorization code, serial number, or other copy
protection device not supplied by NVIDIA directly or through
an authorized reseller; (ix) use the Licensed Software for the
purpose of developing competing products or technologies or
assisting a third party in such activities; (x) use the
Licensed Software with any system or application where the use
or failure of such system or application can reasonably be
expected to threaten or result in personal injury, death, or
catastrophic loss including, without limitation, use in
connection with any nuclear, avionics, navigation, military,
medical, life support or other life critical application
(“Critical Applications”), unless the parties have entered
into a Critical Applications agreement; (xi) distribute any
modification or derivative work you make to the Licensed
Software under or by reference to the same name as used by
NVIDIA; or (xii) use the Licensed Software in any manner that
would cause the Licensed Software to become subject to an Open
Source License. Nothing in the AGREEMENT shall be construed to
give you a right to use, or otherwise obtain access to, any
source code from which the Software or any portion thereof is
compiled or interpreted. You acknowledge that NVIDIA does not
design, test, manufacture or certify the Licensed Software for
use in the context of a Critical Application and NVIDIA shall
not be liable to you or any third party, in whole or in part,
for any claims or damages arising from such use. You agree to
defend, indemnify and hold harmless NVIDIA and its Affiliates,
and their respective employees, contractors, agents, officers
and directors, from and against any and all claims, damages,
obligations, losses, liabilities, costs or debt, fines,
restitutions and expenses (including but not limited to
attorney’s fees and costs incident to establishing the right
of indemnification) arising out of or related to you and your
Enterprise, and their respective employees, contractors,
agents, distributors, resellers, end users, officers and
directors use of Licensed Software outside of the scope of the
AGREEMENT or any other breach of the terms of the AGREEMENT.


1.2.2. Third Party License Obligations

You acknowledge and agree that the Licensed Software may
include or incorporate third party technology (collectively
“Third Party Components”), which is provided for use in or
with the Software and not otherwise used separately. If the
Licensed Software includes or incorporates Third Party
Components, then the third-party pass-through terms and
conditions (“Third Party Terms”) for the particular Third
Party Component will be bundled with the Software or otherwise
made available online as indicated by NVIDIA and will be
incorporated by reference into the AGREEMENT. In the event of
any conflict between the terms in the AGREEMENT and the Third
Party Terms, the Third Party Terms shall govern. Copyright to
Third Party Components are held by the copyright holders
indicated in the copyright notices indicated in the Third
Party Terms.

Audio/Video Encoders and Decoders

You acknowledge and agree that it is your sole responsibility
to obtain any additional third party licenses required to
make, have made, use, have used, sell, import, and offer for
sale your products or services that include or incorporate any
Third Party Components and content relating to audio and/or
video encoders and decoders from, including but not limited
to, Microsoft, Thomson, Fraunhofer IIS, Sisvel S.p.A.,
MPEG-LA, and Coding Technologies as NVIDIA does not grant to
you under the AGREEMENT any necessary patent or other rights
with respect to audio and/or video encoders and decoders.


1.2.3. Limited Rights

Your rights in the Licensed Software are limited to those
expressly granted under the AGREEMENT and no other licenses
are granted whether by implication, estoppel or otherwise.
NVIDIA reserves all rights, title and interest in and to the
Licensed Software not expressly granted under the AGREEMENT.


1.3. Confidentiality

Neither party will use the other party’s Confidential
Information, except as necessary for the performance of the
AGREEMENT, nor will either party disclose such Confidential
Information to any third party, except to personnel of NVIDIA
and its Affiliates, you, your Enterprise, your Enterprise
Contractors, and each party’s legal and financial advisors
that have a need to know such Confidential Information for the
performance of the AGREEMENT, provided that each such
personnel, employee and Contractor is subject to a written
agreement that includes confidentiality obligations consistent
with those set forth herein. Each party will use all
reasonable efforts to maintain the confidentiality of all of
the other party’s Confidential Information in its possession
or control, but in no event less than the efforts that it
ordinarily uses with respect to its own Confidential
Information of similar nature and importance. The foregoing
obligations will not restrict either party from disclosing the
other party’s Confidential Information or the terms and
conditions of the AGREEMENT as required under applicable
securities regulations or pursuant to the order or requirement
of a court, administrative agency, or other governmental body,
provided that the party required to make such disclosure (i)
gives reasonable notice to the other party to enable it to
contest such order or requirement prior to its disclosure
(whether through protective orders or otherwise), (ii) uses
reasonable effort to obtain confidential treatment or similar
protection to the fullest extent possible to avoid such public
disclosure, and (iii) discloses only the minimum amount of
information necessary to comply with such requirements.


1.4. Ownership

You are not obligated to disclose to NVIDIA any modifications
that you, your Enterprise or your Contractors make to the
Licensed Software as permitted under the AGREEMENT. As between
the parties, all modifications are owned by NVIDIA and
licensed to you under the AGREEMENT unless otherwise expressly
provided in a Supplement. The Licensed Software and all
modifications owned by NVIDIA, and the respective Intellectual
Property Rights therein, are and will remain the sole and
exclusive property of NVIDIA or its licensors, whether the
Licensed Software is separate from or combined with any other
products or materials. You shall not engage in any act or
omission that would impair NVIDIA’s and/or its licensors’
Intellectual Property Rights in the Licensed Software or any
Do you accept the previously read EULA?
accept/decline/quit: accept

Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 396.26?
(y)es/(n)o/(q)uit: y

Do you want to install the OpenGL libraries?
(y)es/(n)o/(q)uit [ default is yes ]:

Do you want to run nvidia-xconfig?
This will update the system X configuration file so that the NVIDIA X driver
is used. The pre-existing X configuration file will be backed up.
This option should not be used on systems that require a custom
X configuration, such as systems with multiple GPU vendors.
(y)es/(n)o/(q)uit [ default is no ]: y

Install the CUDA 9.2 Toolkit?
(y)es/(n)o/(q)uit: y

Enter Toolkit Location
[ default is /usr/local/cuda-9.2 ]:

Do you want to install a symbolic link at /usr/local/cuda?
(y)es/(n)o/(q)uit: y

Install the CUDA 9.2 Samples?
(y)es/(n)o/(q)uit: y

Enter CUDA Samples Location
[ default is /home/leo ]: /home/leo/cuda-example

Installing the NVIDIA display driver...
Installing the CUDA Toolkit in /usr/local/cuda-9.2 ...
Missing recommended library: libGLU.so
Missing recommended library: libXi.so
Missing recommended library: libXmu.so

Installing the CUDA Samples in /home/leo/cuda-example ...
Copying samples to /home/leo/cuda-example/NVIDIA_CUDA-9.2_Samples now...
Finished copying samples.

===========
= Summary =
===========

Driver: Installed
Toolkit: Installed in /usr/local/cuda-9.2
Samples: Installed in /home/leo/cuda-example, but missing recommended libraries

Please make sure that
- PATH includes /usr/local/cuda-9.2/bin
- LD_LIBRARY_PATH includes /usr/local/cuda-9.2/lib64, or, add /usr/local/cuda-9.2/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run the uninstall script in /usr/local/cuda-9.2/bin
To uninstall the NVIDIA Driver, run nvidia-uninstall

Please see CUDA_Installation_Guide_Linux.pdf in /usr/local/cuda-9.2/doc/pdf for detailed information on setting up CUDA.

Logfile is /tmp/cuda_install_6554.log
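
After the installer finishes (and with PATH set as above), the installation can be sanity-checked with the standard tools:

$ nvcc --version      # should report the CUDA 9.2 release
$ nvidia-smi          # should list the GPU and the driver version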

Troubleshooting

  1. ERROR: Unable to load the ‘nvidia-drm’ kernel module.

One probable reason is that the system is boot from UEFI but Secure Boot option is turned on in the BIOS setting. Turn it off and the problem will be solved.

  2. Error: You Appear To Be Running An X Server; Please Exit X Before Installing

    1. Press CTRL+ALT+F1 to switch to a virtual console.

    2. sudo service lightdm stop (or sudo stop lightdm)

    3. sudo init 3

    4. Install the driver: sudo ./NVIDIA-Linux-x86_64-177.67-pkg2.run (run the NVIDIA installer from the current directory)

    5. Follow the prompts to finish; the simplest wrap-up is a reboot: sudo reboot

GNU Automake (version 1.16.1, 26 February 2018)

Permission is granted to copy, distribute and/or modify this
document under the terms of the GNU Free Documentation License,
Version 1.3 or any later version published by the Free Software
Foundation; with no Invariant Sections, with no Front-Cover texts,
and with no Back-Cover Texts. A copy of the license is included in
the section entitled “GNU Free Documentation License.”

27 Frequently Asked Questions about Automake


This chapter covers some questions that often come up on the mailing
lists.

27.1 CVS and generated files

Background: distributed generated Files

Packages made with Autoconf and Automake ship with some generated files
like ‘configure’ or ‘Makefile.in’. These files were generated on the
developer’s machine and are distributed so that end-users do not have to
install the maintainer tools required to rebuild them. Other generated
files like Lex scanners, Yacc parsers, or Info documentation, are
usually distributed on similar grounds.

Automake outputs rules in ‘Makefile’s to rebuild these files. For
instance, ‘make’ will run ‘autoconf’ to rebuild ‘configure’ whenever
‘configure.ac’ is changed. This makes development safer by ensuring a
‘configure’ is never out-of-date with respect to ‘configure.ac’.

As generated files shipped in packages are up-to-date, and because
‘tar’ preserves timestamps, these rebuild rules are not triggered when
a user unpacks and builds a package.

Background: CVS and Timestamps

Unless you use CVS keywords (in which case files must be updated at
commit time), CVS preserves timestamp during ‘cvs commit’ and ‘cvs
import -d’ operations.

When you check out a file using ‘cvs checkout’ its timestamp is set
to that of the revision that is being checked out.

However, during ‘cvs update’, files will have the date of the update,
not the original timestamp of this revision. This is meant to make sure
that ‘make’ notices source files have been updated.

This timestamp shift is troublesome when both sources and generated
files are kept under CVS. Because CVS processes files in lexical order,
‘configure.ac’ will appear newer than ‘configure’ after a ‘cvs update’
that updates both files, even if ‘configure’ was newer than
‘configure.ac’ when it was checked in. Calling ‘make’ will then trigger
a spurious rebuild of ‘configure’.

Living with CVS in Autoconfiscated Projects

There are basically two clans amongst maintainers: those who keep all
distributed files under CVS, including generated files, and those who
keep generated files out of CVS.

All Files in CVS
................

• The CVS repository contains all distributed files so you know
exactly what is distributed, and you can checkout any prior version
entirely.

• Maintainers can see how generated files evolve (for instance, you
can see what happens to your ‘Makefile.in’s when you upgrade
Automake and make sure they look OK).

• Users do not need the autotools to build a checkout of the project,
it works just like a released tarball.

• If users use ‘cvs update’ to update their copy, instead of ‘cvs
checkout’ to fetch a fresh one, timestamps will be inaccurate.
Some rebuild rules will be triggered and attempt to run developer
tools such as ‘autoconf’ or ‘automake’.

 Calls to such tools are all wrapped into a call to the ‘missing’
 script discussed later (*note maintainer-mode::), so that the user
 will see more descriptive warnings about missing or out-of-date
 tools, and possible suggestions about how to obtain them, rather
 than just some “command not found” error, or (worse) some obscure
 message from some older version of the required tool they happen to
 have installed.

 Maintainers interested in keeping their package buildable from a
 CVS checkout even for those users that lack maintainer-specific
tools might want to provide a helper script (or to enhance their
 existing bootstrap script) to fix the timestamps after a ‘cvs
 update’ or a ‘git checkout’, to prevent spurious rebuilds.  In case
 of a project committing the Autotools-generated files, as well as
 the generated ‘.info’ files, such script might look something like
 this:

      #!/bin/sh
      # fix-timestamp.sh: prevents useless rebuilds after "cvs update"
      sleep 1
      # aclocal-generated aclocal.m4 depends on locally-installed
      # '.m4' macro files, as well as on 'configure.ac'
      touch aclocal.m4
      sleep 1
      # autoconf-generated configure depends on aclocal.m4 and on
      # configure.ac
      touch configure
      # so does autoheader-generated config.h.in
      touch config.h.in
      # and all the automake-generated Makefile.in files
      touch `find . -name Makefile.in -print`
      # finally, the makeinfo-generated '.info' files depend on the
      # corresponding '.texi' files
      touch doc/*.info

• In distributed development, developers are likely to have different
versions of the maintainer tools installed. In this case rebuilds
triggered by timestamp lossage will lead to spurious changes to
generated files. There are several solutions to this:

    • All developers should use the same versions, so that the
      rebuilt files are identical to files in CVS.  (This starts to
      be difficult when each project you work on uses different
      versions.)
    • Or people use a script to fix the timestamp after a checkout
      (the GCC folks have such a script).
    • Or ‘configure.ac’ uses ‘AM_MAINTAINER_MODE’, which will
      disable all of these rebuild rules by default.  This is
      further discussed in *note maintainer-mode::.

• Although we focused on spurious rebuilds, the converse can also
happen. CVS’s timestamp handling can also let you think an
out-of-date file is up-to-date.

 For instance, suppose a developer has modified ‘Makefile.am’ and
 has rebuilt ‘Makefile.in’, and then decides to do a last-minute
 change to ‘Makefile.am’ right before checking in both files
 (without rebuilding ‘Makefile.in’ to account for the change).

 This last change to ‘Makefile.am’ makes the copy of ‘Makefile.in’
 out-of-date.  Since CVS processes files alphabetically, when
 another developer ‘cvs update’s his or her tree, ‘Makefile.in’ will
 happen to be newer than ‘Makefile.am’.  This other developer will
 not see that ‘Makefile.in’ is out-of-date.

Generated Files out of CVS
..........................

One way to get CVS and ‘make’ working peacefully is to never store
generated files in CVS, i.e., do not CVS-control files that are
‘Makefile’ targets (also called derived files).

This way developers are not annoyed by changes to generated files.
It does not matter if they all have different versions (assuming they
are compatible, of course). And finally, timestamps are not lost,
changes to source files can’t be missed as in the
‘Makefile.am’/‘Makefile.in’ example discussed earlier.

The drawback is that the CVS repository is not an exact copy of what
is distributed and that users now need to install various development
tools (maybe even specific versions) before they can build a checkout.
But, after all, CVS’s job is versioning, not distribution.

Allowing developers to use different versions of their tools can also
hide bugs during distributed development. Indeed, developers will be
using (hence testing) their own generated files, instead of the
generated files that will actually be released. The developer who
prepares the tarball might be using a version of the tool that produces
bogus output (for instance a non-portable C file), something other
developers could have noticed if they weren’t using their own versions
of this tool.

Third-party Files

Another class of files not discussed here (because they do not cause
timestamp issues) are files that are shipped with a package, but
maintained elsewhere. For instance, tools like ‘gettextize’ and
‘autopoint’ (from Gettext) or ‘libtoolize’ (from Libtool), will install
or update files in your package.

These files, whether they are kept under CVS or not, raise similar
concerns about version mismatch between developers’ tools. The Gettext
manual has a section about this, see *note CVS Issues: (gettext)CVS
Issues.

27.2 ‘missing’ and ‘AM_MAINTAINER_MODE’

‘missing’

The ‘missing’ script is a wrapper around several maintainer tools,
designed to warn users if a maintainer tool is required but missing.
Typical maintainer tools are ‘autoconf’, ‘automake’, ‘bison’, etc.
Because files generated by these tools are shipped with the other sources
of a package, these tools shouldn’t be required during a user build and
they are not checked for in ‘configure’.

However, if for some reason a rebuild rule is triggered and involves
a missing tool, ‘missing’ will notice it and warn the user, even
suggesting how to obtain such a tool (at least in case it is a
well-known one, like ‘makeinfo’ or ‘bison’). This is more helpful and
user-friendly than just having the rebuild rules spewing out a terse
error message like ‘sh: TOOL: command not found’. Similarly, ‘missing’
will warn the user if it detects that a maintainer tool it attempted to
use seems too old (be warned that diagnosing this correctly is typically
more difficult than detecting missing tools, and requires cooperation
from the tool itself, so it won’t always work).

If the required tool is installed, ‘missing’ will run it and won’t
attempt to continue after failures. This is correct during development:
developers love fixing failures. However, users with missing or too old
maintainer tools may get an error when the rebuild rule is spuriously
triggered, halting the build. This failure to let the build continue is
one of the arguments of the ‘AM_MAINTAINER_MODE’ advocates.

‘AM_MAINTAINER_MODE’

‘AM_MAINTAINER_MODE’ allows you to choose whether the so called “rebuild
rules” should be enabled or disabled. With
‘AM_MAINTAINER_MODE([enable])’, they are enabled by default, otherwise
they are disabled by default. In the latter case, if you have
‘AM_MAINTAINER_MODE’ in ‘configure.ac’, and run ‘./configure && make’,
then ‘make’ will never attempt to rebuild ‘configure’, ‘Makefile.in’s,
Lex or Yacc outputs, etc. I.e., this disables build rules for files
that are usually distributed and that users should normally not have to
update.

The user can override the default setting by passing either
‘--enable-maintainer-mode’ or ‘--disable-maintainer-mode’ to
‘configure’.
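
As a quick illustration (a sketch, not taken from the manual): the macro
goes in ‘configure.ac’, and a user flips the default from the command
line.

     # in configure.ac: enable the rebuild rules by default
     #   AM_MAINTAINER_MODE([enable])
     # a user who dislikes them can still turn them off:
     ./configure --disable-maintainer-mode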

People use ‘AM_MAINTAINER_MODE’ either because they do not want their
users (or themselves) annoyed by timestamp lossage (*note CVS::), or
because they simply can’t stand the rebuild rules and prefer running
maintainer tools explicitly.

‘AM_MAINTAINER_MODE’ also allows you to disable some custom build
rules conditionally. Some developers use this feature to disable rules
that need exotic tools that users may not have available.

Several years ago François Pinard pointed out several arguments
against this ‘AM_MAINTAINER_MODE’ macro. Most of them relate to
insecurity. By removing dependencies you get non-dependable builds:
changes to source files can have no effect on generated files and this
can be very confusing when unnoticed. He adds that security shouldn’t
be reserved to maintainers (what ‘--enable-maintainer-mode’ suggests),
on the contrary. If one user has to modify a ‘Makefile.am’, then either
‘Makefile.in’ should be updated or a warning should be output (this is
what Automake uses ‘missing’ for) but the last thing you want is that
nothing happens and the user doesn’t notice it (this is what happens
when rebuild rules are disabled by ‘AM_MAINTAINER_MODE’).

Jim Meyering, the inventor of the ‘AM_MAINTAINER_MODE’ macro was
swayed by François’s arguments, and got rid of ‘AM_MAINTAINER_MODE’ in
all of his packages.

Still many people continue to use ‘AM_MAINTAINER_MODE’, because it
helps them working on projects where all files are kept under version
control, and because ‘missing’ isn’t enough if you have the wrong
version of the tools.

27.3 Why doesn’t Automake support wildcards?

Developers are lazy. They would often like to use wildcards in
‘Makefile.am’s, so that they would not need to remember to update
‘Makefile.am’s every time they add, delete, or rename a file.

There are several objections to this:
• When using CVS (or similar) developers need to remember they have
to run ‘cvs add’ or ‘cvs rm’ anyway. Updating ‘Makefile.am’
accordingly quickly becomes a reflex.

 Conversely, if your application doesn’t compile because you forgot
 to add a file in ‘Makefile.am’, it will help you remember to ‘cvs
 add’ it.

• Using wildcards makes it easy to distribute files by mistake. For
instance, some code a developer is experimenting with (a test case,
say) that should not be part of the distribution.

• Using wildcards it’s easy to omit some files by mistake. For
instance, one developer creates a new file, uses it in many places,
but forgets to commit it. Another developer then checks out the
incomplete project and is able to run ‘make dist’ successfully,
even though a file is missing. By listing files, ‘make dist’
will complain.

• Wildcards are not portable to some non-GNU ‘make’ implementations,
e.g., NetBSD ‘make’ will not expand globs such as ‘*’ in
prerequisites of a target.

• Finally, it’s really hard to forget to add a file to
‘Makefile.am’: files that are not listed in ‘Makefile.am’ are not
compiled or installed, so you can’t even test them.

Still, these are philosophical objections, and as such you may
disagree, or find enough value in wildcards to dismiss all of them.
Before you start writing a patch against Automake to teach it about
wildcards, let’s see the main technical issue: portability.

Although ‘$(wildcard …)’ works with GNU ‘make’, it is not portable
to other ‘make’ implementations.

The only way Automake could support ‘$(wildcard …)’ is by expanding
‘$(wildcard …)’ when ‘automake’ is run. The resulting ‘Makefile.in’s
would be portable since they would list all files and not use
‘$(wildcard …)’. However that means developers would need to remember
to run ‘automake’ each time they add, delete, or rename files.

Compared to editing ‘Makefile.am’, this is a very small gain. Sure,
it’s easier and faster to type ‘automake; make’ than to type ‘emacs
Makefile.am; make’. But nobody bothered enough to write a patch to add
support for this syntax. Some people use scripts to generate file lists
in ‘Makefile.am’ or in separate ‘Makefile’ fragments.
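
Such a generator script might look like the following sketch
(hypothetical; ‘sources.mk’ would be included from a ‘Makefile’
fragment and the script re-run whenever files are added or removed):

     #!/bin/sh
     # regenerate the source list consumed by the build system
     {
       printf 'foo_SOURCES ='
       for f in src/*.c; do
         printf ' \\\n         %s' "$f"
       done
       echo
     } > sources.mk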

Even if you don’t care about portability, and are tempted to use
‘$(wildcard …)’ anyway because you target only GNU Make, you should
know there are many places where Automake needs to know exactly which
files should be processed. As Automake doesn’t know how to expand
‘$(wildcard …)’, you cannot use it in these places. ‘$(wildcard …)’
is a black box comparable to ‘AC_SUBST’ed variables as far as Automake is
concerned.

You can get warnings about ‘$(wildcard …)’ constructs using the
‘-Wportability’ flag.

27.4 Limitations on File Names

Automake attempts to support all kinds of file names, even those that
contain unusual characters or are unusually long. However, some
limitations are imposed by the underlying operating system and tools.

Most operating systems prohibit the use of the null byte in file
names, and reserve ‘/’ as a directory separator. Also, they require
that file names are properly encoded for the user’s locale. Automake is
subject to these limits.

Portable packages should limit themselves to POSIX file names. These
can contain ASCII letters and digits, ‘_’, ‘.’, and ‘-’. File names
consist of components separated by ‘/’. File name components cannot
begin with ‘-’.

Portable POSIX file names cannot contain components that exceed a
14-byte limit, but nowadays it’s normally safe to assume the
more-generous XOPEN limit of 255 bytes. POSIX limits file names to 255
bytes (XOPEN allows 1023 bytes), but you may want to limit a source
tarball to file names of 99 bytes to avoid interoperability problems
with old versions of ‘tar’.

If you depart from these rules (e.g., by using non-ASCII characters
in file names, or by using lengthy file names), your installers may have
problems for reasons unrelated to Automake. However, if this does not
concern you, you should know about the limitations imposed by Automake
itself. These limitations are undesirable, but some of them seem to be
inherent to underlying tools like Autoconf, Make, M4, and the shell.
They fall into three categories: install directories, build directories,
and file names.

The following characters:

 newline " # $ ' `

should not appear in the names of install directories. For example,
the operand of ‘configure’’s ‘--prefix’ option should not contain these
characters.

Build directories suffer the same limitations as install directories,
and in addition should not contain the following characters:

 & @ \

For example, the full name of the directory containing the source
files should not contain these characters.

Source and installation file names like ‘main.c’ are limited even
further: they should conform to the POSIX/XOPEN rules described above.
In addition, if you plan to port to non-POSIX environments, you should
avoid file names that differ only in case (e.g., ‘makefile’ and
‘Makefile’). Nowadays it is no longer worth worrying about the 8.3
limits of DOS file systems.

27.5 Errors with distclean

This is a diagnostic you might encounter while running ‘make distcheck’.

As explained in *note Checking the Distribution::, ‘make distcheck’
attempts to build and check your package for errors like this one.

‘make distcheck’ will perform a ‘VPATH’ build of your package (*note
VPATH Builds::), and then call ‘make distclean’. Files left in the
build directory after ‘make distclean’ has run are listed after this
error.

This diagnostic really covers two kinds of errors:

• files that are forgotten by distclean;
• distributed files that are erroneously rebuilt.

The former left-over files are not distributed, so the fix is to mark
them for cleaning (*note Clean::), this is obvious and doesn’t deserve
more explanations.

The latter bug is not always easy to understand and fix, so let’s
proceed with an example. Suppose our package contains a program for
which we want to build a man page using ‘help2man’. GNU ‘help2man’
produces simple manual pages from the ‘--help’ and ‘--version’ output of
other commands (*note Overview: (help2man)Top.). Because we don’t want
to force our users to install ‘help2man’, we decide to distribute the
generated man page using the following setup.

 # This Makefile.am is bogus.
 bin_PROGRAMS = foo
 foo_SOURCES = foo.c
 dist_man_MANS = foo.1

 foo.1: foo$(EXEEXT)
         help2man --output=foo.1 ./foo$(EXEEXT)

This will effectively distribute the man page. However, ‘make
distcheck’ will fail with:

 ERROR: files left in build directory after distclean:
 ./foo.1

Why was ‘foo.1’ rebuilt? Because although distributed, ‘foo.1’
depends on a non-distributed built file: ‘foo$(EXEEXT)’. ‘foo$(EXEEXT)’
is built by the user, so it will always appear to be newer than the
distributed ‘foo.1’.

‘make distcheck’ caught an inconsistency in our package. Our intent
was to distribute ‘foo.1’ so users do not need to install ‘help2man’,
however since this rule causes this file to be always rebuilt, users
do need ‘help2man’. Either we should ensure that ‘foo.1’ is not
rebuilt by users, or there is no point in distributing ‘foo.1’.

More generally, the rule is that distributed files should never
depend on non-distributed built files. If you distribute something
generated, distribute its sources.

One way to fix the above example, while still distributing ‘foo.1’ is
to not depend on ‘foo$(EXEEXT)’. For instance, assuming ‘foo --version’
and ‘foo --help’ do not change unless ‘foo.c’ or ‘configure.ac’ change,
we could write the following ‘Makefile.am’:

 bin_PROGRAMS = foo
 foo_SOURCES = foo.c
 dist_man_MANS = foo.1

 foo.1: foo.c $(top_srcdir)/configure.ac
         $(MAKE) $(AM_MAKEFLAGS) foo$(EXEEXT)
         help2man --output=foo.1 ./foo$(EXEEXT)

This way, ‘foo.1’ will not get rebuilt every time ‘foo$(EXEEXT)’
changes. The ‘make’ call makes sure ‘foo$(EXEEXT)’ is up-to-date before
‘help2man’. Another way to ensure this would be to use separate
directories for binaries and man pages, and set ‘SUBDIRS’ so that
binaries are built before man pages.

We could also decide not to distribute ‘foo.1’. In this case it’s
fine to have ‘foo.1’ dependent upon ‘foo$(EXEEXT)’, since both will have
to be rebuilt. However it would be impossible to build the package in a
cross-compilation, because building ‘foo.1’ involves an execution of
‘foo$(EXEEXT)’.

Another context where such errors are common is when distributed
files are built by tools that are built by the package. The pattern is
similar:

 distributed-file: built-tools distributed-sources
         build-command

should be changed to

 distributed-file: distributed-sources
         $(MAKE) $(AM_MAKEFLAGS) built-tools
         build-command

or you could choose not to distribute ‘distributed-file’, if
cross-compilation does not matter.

The points made through these examples are worth a summary:

• Distributed files should never depend upon non-distributed built
files.
• Distributed files should be distributed with all their
dependencies.
• If a file is intended to be rebuilt by users, then there is no
point in distributing it.

For desperate cases, it’s always possible to disable this check by
setting ‘distcleancheck_listfiles’ as documented in *note Checking the
Distribution::. Make sure you do understand the reason why ‘make
distcheck’ complains before you do this. ‘distcleancheck_listfiles’ is
a way to hide errors, not to fix them. You can always do better.

27.6 Flag Variables Ordering

 What is the difference between ‘AM_CFLAGS’, ‘CFLAGS’, and
 ‘mumble_CFLAGS’?

 Why does ‘automake’ output ‘CPPFLAGS’ after
 ‘AM_CPPFLAGS’ on compile lines?  Shouldn’t it be the converse?

 My ‘configure’ adds some warning flags into ‘CXXFLAGS’.  In
 one ‘Makefile.am’ I would like to append a new flag, however if I
 put the flag into ‘AM_CXXFLAGS’ it is prepended to the other
 flags, not appended.

Compile Flag Variables

This section attempts to answer all the above questions. We will mostly
discuss ‘CPPFLAGS’ in our examples, but actually the answer holds for
all the compile flags used in Automake: ‘CCASFLAGS’, ‘CFLAGS’,
‘CPPFLAGS’, ‘CXXFLAGS’, ‘FCFLAGS’, ‘FFLAGS’, ‘GCJFLAGS’, ‘LDFLAGS’,
‘LFLAGS’, ‘LIBTOOLFLAGS’, ‘OBJCFLAGS’, ‘OBJCXXFLAGS’, ‘RFLAGS’,
‘UPCFLAGS’, and ‘YFLAGS’.

‘CPPFLAGS’, ‘AM_CPPFLAGS’, and ‘mumble_CPPFLAGS’ are three variables
that can be used to pass flags to the C preprocessor (actually these
variables are also used for other languages like C++ or preprocessed
Fortran). ‘CPPFLAGS’ is the user variable (*note User Variables::),
‘AM_CPPFLAGS’ is the Automake variable, and ‘mumble_CPPFLAGS’ is the
variable specific to the ‘mumble’ target (we call this a per-target
variable, *note Program and Library Variables::).

Automake always uses two of these variables when compiling C sources
files. When compiling an object file for the ‘mumble’ target, the first
variable will be ‘mumble_CPPFLAGS’ if it is defined, or ‘AM_CPPFLAGS’
otherwise. The second variable is always ‘CPPFLAGS’.

In the following example,

 bin_PROGRAMS = foo bar
 foo_SOURCES = xyz.c
 bar_SOURCES = main.c
 foo_CPPFLAGS = -DFOO
 AM_CPPFLAGS = -DBAZ

‘xyz.o’ will be compiled with ‘$(foo_CPPFLAGS) $(CPPFLAGS)’, (because
‘xyz.o’ is part of the ‘foo’ target), while ‘main.o’ will be compiled
with ‘$(AM_CPPFLAGS) $(CPPFLAGS)’ (because there is no per-target
variable for target ‘bar’).

The difference between ‘mumble_CPPFLAGS’ and ‘AM_CPPFLAGS’ being
clear enough, let’s focus on ‘CPPFLAGS’. ‘CPPFLAGS’ is a user variable,
i.e., a variable that users are entitled to modify in order to compile
the package. This variable, like many others, is documented at the end
of the output of ‘configure --help’.

For instance, someone who needs to add ‘/home/my/usr/include’ to the
C compiler’s search path would configure a package with

 ./configure CPPFLAGS='-I /home/my/usr/include'

and this flag would be propagated to the compile rules of all
‘Makefile’s.

It is also not uncommon to override a user variable at ‘make’-time.
Many installers do this with ‘prefix’, but this can be useful with
compiler flags too. For instance, if, while debugging a C++ project,
you need to disable optimization in one specific object file, you can
run something like

 rm file.o
 make CXXFLAGS=-O0 file.o
 make

The reason ‘$(CPPFLAGS)’ appears after ‘$(AM_CPPFLAGS)’ or
‘$(mumble_CPPFLAGS)’ in the compile command is that users should always
have the last say. It probably makes more sense if you think about it
while looking at the ‘CXXFLAGS=-O0’ above, which should supersede any
other switch from ‘AM_CXXFLAGS’ or ‘mumble_CXXFLAGS’ (and this of course
replaces the previous value of ‘CXXFLAGS’).

You should never redefine a user variable such as ‘CPPFLAGS’ in
‘Makefile.am’. Use ‘automake -Woverride’ to diagnose such mistakes.
Even something like

 CPPFLAGS = -DDATADIR=\"$(datadir)\" @CPPFLAGS@

is erroneous. Although this preserves ‘configure’’s value of
‘CPPFLAGS’, the definition of ‘DATADIR’ will disappear if a user
attempts to override ‘CPPFLAGS’ from the ‘make’ command line.

 AM_CPPFLAGS = -DDATADIR=\"$(datadir)\"

is all that is needed here if no per-target flags are used.

You should not add options to these user variables within ‘configure’
either, for the same reason. Occasionally you need to modify these
variables to perform a test, but you should reset their values
afterwards. In contrast, it is OK to modify the ‘AM_’ variables within
‘configure’ if you ‘AC_SUBST’ them, but it is rather rare that you need
to do this, unless you really want to change the default definitions of
the ‘AM_’ variables in all ‘Makefile’s.

What we recommend is that you define extra flags in separate
variables. For instance, you may write an Autoconf macro that computes
a set of warning options for the C compiler, and ‘AC_SUBST’ them in
‘WARNINGCFLAGS’; you may also have an Autoconf macro that determines
which compiler and which linker flags should be used to link with
library ‘libfoo’, and ‘AC_SUBST’ these in ‘LIBFOOCFLAGS’ and
‘LIBFOOLDFLAGS’. Then, a ‘Makefile.am’ could use these variables as
follows:

 AM_CFLAGS = $(WARNINGCFLAGS)
 bin_PROGRAMS = prog1 prog2
 prog1_SOURCES = ...
 prog2_SOURCES = ...
 prog2_CFLAGS = $(LIBFOOCFLAGS) $(AM_CFLAGS)
 prog2_LDFLAGS = $(LIBFOOLDFLAGS)

In this example both programs will be compiled with the flags
substituted into ‘$(WARNINGCFLAGS)’, and ‘prog2’ will additionally be
compiled with the flags required to link with ‘libfoo’.

Note that listing ‘AM_CFLAGS’ in a per-target ‘CFLAGS’ variable is a
common idiom to ensure that ‘AM_CFLAGS’ applies to every target in a
‘Makefile.in’.

Using variables like this gives you full control over the ordering of
the flags. For instance, if there is a flag in $(WARNINGCFLAGS) that
you want to negate for a particular target, you can use something like
‘prog1_CFLAGS = $(AM_CFLAGS) -no-flag’. If all of these flags had been
forcefully appended to ‘CFLAGS’, there would be no way to disable one
flag. Yet another reason to leave user variables to users.

Finally, we have avoided naming the variable of the example
‘LIBFOO_LDFLAGS’ (with an underscore) because that would cause Automake
to think that this is actually a per-target variable (like
‘mumble_LDFLAGS’) for some non-declared ‘LIBFOO’ target.

Other Variables

There are other variables in Automake that follow similar principles to
allow user options. For instance, Texinfo rules (*note Texinfo::) use
‘MAKEINFOFLAGS’ and ‘AM_MAKEINFOFLAGS’. Similarly, DejaGnu tests (*note
DejaGnu Tests::) use ‘RUNTESTDEFAULTFLAGS’ and ‘AM_RUNTESTDEFAULTFLAGS’.
The tags and ctags rules (*note Tags::) use ‘ETAGSFLAGS’,
‘AM_ETAGSFLAGS’, ‘CTAGSFLAGS’, and ‘AM_CTAGSFLAGS’. Java rules (*note
Java::) use ‘JAVACFLAGS’ and ‘AM_JAVACFLAGS’. None of these rules
support per-target flags (yet).

To some extent, even ‘AM_MAKEFLAGS’ (*note Subdirectories::) obeys
this naming scheme. The slight difference is that ‘MAKEFLAGS’ is passed
to sub-‘make’s implicitly by ‘make’ itself.

‘ARFLAGS’ (*note A Library::) is usually defined by Automake and has
neither ‘AM_’ nor per-target cousin.

Finally you should not think that the existence of a per-target
variable implies the existence of an ‘AM_’ variable or of a user
variable. For instance, the ‘mumble_LDADD’ per-target variable
overrides the makefile-wide ‘LDADD’ variable (which is not a user
variable), and ‘mumble_LIBADD’ exists only as a per-target variable.
*Note Program and Library Variables::.

27.7 Why are object files sometimes renamed?

This happens when per-target compilation flags are used. Object files
need to be renamed just in case they would clash with object files
compiled from the same sources, but with different flags. Consider the
following example.

 bin_PROGRAMS = true false
 true_SOURCES = generic.c
 true_CPPFLAGS = -DEXIT_CODE=0
 false_SOURCES = generic.c
 false_CPPFLAGS = -DEXIT_CODE=1

Obviously the two programs are built from the same source, but it would
be bad if they shared the same object, because ‘generic.o’ cannot be
built with both ‘-DEXIT_CODE=0’ and ‘-DEXIT_CODE=1’. Therefore
‘automake’ outputs rules to build two different objects:
‘true-generic.o’ and ‘false-generic.o’.

‘automake’ doesn’t actually check whether source files are shared to
decide if it must rename objects. It simply renames all objects of a
target as soon as it sees per-target compilation flags in use.

It’s OK to share object files when per-target compilation flags are
not used. For instance, ‘true’ and ‘false’ will both use ‘version.o’ in
the following example.

 AM_CPPFLAGS = -DVERSION=1.0
 bin_PROGRAMS = true false
 true_SOURCES = true.c version.c
 false_SOURCES = false.c version.c

Note that the renaming of objects is also affected by the
‘_SHORTNAME’ variable (*note Program and Library Variables::).
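
As a brief sketch (the program name is illustrative): since per-target
flags are used below, objects are renamed, and ‘maude_SHORTNAME’ shortens
the prefix, so ‘main.c’ compiles to ‘m-main.o’ instead of ‘maude-main.o’.

 bin_PROGRAMS = maude
 maude_SHORTNAME = m
 maude_SOURCES = main.c
 maude_CPPFLAGS = -DSOMEFLAG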

27.8 Per-Object Flags Emulation

 One of my source files needs to be compiled with different flags.  How
 do I do that?

Automake supports per-program and per-library compilation flags (see
*note Program and Library Variables:: and *note Flag Variables
Ordering::). With this you can define compilation flags that apply to
all files compiled for a target. For instance, in

 bin_PROGRAMS = foo
 foo_SOURCES = foo.c foo.h bar.c bar.h main.c
 foo_CFLAGS = -some -flags

‘foo-foo.o’, ‘foo-bar.o’, and ‘foo-main.o’ will all be compiled with
‘-some -flags’. (If you wonder about the names of these object files,
see *note Renamed Objects::.) Note that ‘foo_CFLAGS’ gives the flags to
use when compiling all the C sources of the program ‘foo’; it has
nothing to do with ‘foo.c’ or ‘foo-foo.o’ specifically.

What if ‘foo.c’ needs to be compiled into ‘foo.o’ using some specific
flags that none of the other files require? Obviously per-program
flags are not directly applicable here. Something like per-object
flags is expected, i.e., flags that would be used only when creating
‘foo-foo.o’. Automake does not support that; however, this is easy to
simulate using a library that contains only that object, and compiling
this library with per-library flags.

 bin_PROGRAMS = foo
 foo_SOURCES = bar.c bar.h main.c
 foo_CFLAGS = -some -flags
 foo_LDADD = libfoo.a
 noinst_LIBRARIES = libfoo.a
 libfoo_a_SOURCES = foo.c foo.h
 libfoo_a_CFLAGS = -some -other -flags

Here ‘foo-bar.o’ and ‘foo-main.o’ will both be compiled with ‘-some
-flags’, while ‘libfoo_a-foo.o’ will be compiled using ‘-some -other
-flags’. Eventually, all three objects will be linked to form ‘foo’.

This trick can also be achieved using Libtool convenience libraries,
for instance ‘noinst_LTLIBRARIES = libfoo.la’ (*note Libtool Convenience
Libraries::).
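
For reference, a sketch of the previous example rewritten with a Libtool
convenience library (note the ‘_la_’ infix in the per-library variables):

 bin_PROGRAMS = foo
 foo_SOURCES = bar.c bar.h main.c
 foo_CFLAGS = -some -flags
 foo_LDADD = libfoo.la
 noinst_LTLIBRARIES = libfoo.la
 libfoo_la_SOURCES = foo.c foo.h
 libfoo_la_CFLAGS = -some -other -flags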

Another tempting idea to implement per-object flags is to override
the compile rules ‘automake’ would output for these files. Automake
will not define a rule for a target you have defined, so you could think
about defining the ‘foo-foo.o: foo.c’ rule yourself. We recommend
against this, because this is error prone. For instance, if you add
such a rule to the first example, it will break the day you decide to
remove ‘foo_CFLAGS’ (because ‘foo.c’ will then be compiled as ‘foo.o’
instead of ‘foo-foo.o’, *note Renamed Objects::). Also in order to
support dependency tracking, the two ‘.o’/‘.obj’ extensions, and all the
other flags variables involved in a compilation, you will end up
modifying a copy of the rule previously output by ‘automake’ for this
file. If a new release of Automake generates a different rule, your
copy will need to be updated by hand.

27.9 Handling Tools that Produce Many Outputs

This section describes a ‘make’ idiom that can be used when a tool
produces multiple output files. It is not specific to Automake and can
be used in ordinary ‘Makefile’s.

Suppose we have a program called ‘foo’ that will read one file called
‘data.foo’ and produce two files named ‘data.c’ and ‘data.h’. We want
to write a ‘Makefile’ rule that captures this one-to-two dependency.

The naive rule is incorrect:

 # This is incorrect.
 data.c data.h: data.foo
         foo data.foo

What the above rule really says is that ‘data.c’ and ‘data.h’ each
depend on ‘data.foo’, and can each be built by running ‘foo data.foo’.
In other words it is equivalent to:

 # We do not want this.
 data.c: data.foo
         foo data.foo
 data.h: data.foo
         foo data.foo

which means that ‘foo’ can be run twice. Usually it will not be run
twice, because ‘make’ implementations are smart enough to check for the
existence of the second file after the first one has been built; they
will therefore detect that it already exists. However there are a few
situations where it can run twice anyway:

• The most worrying case is when running a parallel ‘make’. If
‘data.c’ and ‘data.h’ are built in parallel, two ‘foo data.foo’
commands will run concurrently. This is harmful.
• Another case is when the dependency (here ‘data.foo’) is (or
depends upon) a phony target.

A solution that works with parallel ‘make’ but not with phony
dependencies is the following:

 data.c data.h: data.foo
         foo data.foo
 data.h: data.c

The above rules are equivalent to

 data.c: data.foo
         foo data.foo
 data.h: data.foo data.c
         foo data.foo

therefore a parallel ‘make’ will have to serialize the builds of
‘data.c’ and ‘data.h’, and will detect that the second is no longer
needed once the first is over.

Using this pattern is probably enough for most cases. However it
does not scale easily to more output files (in this scheme all output
files must be totally ordered by the dependency relation), so we will
explore a more complicated solution.

Another idea is to write the following:

 # There is still a problem with this one.
 data.c: data.foo
         foo data.foo
 data.h: data.c

The idea is that ‘foo data.foo’ is run only when ‘data.c’ needs to be
updated, but we further state that ‘data.h’ depends upon ‘data.c’. That
way, if ‘data.h’ is required and ‘data.foo’ is out of date, the
dependency on ‘data.c’ will trigger the build.

This is almost perfect, but suppose we have built ‘data.h’ and
‘data.c’, and then we erase ‘data.h’. Then, running ‘make data.h’ will
not rebuild ‘data.h’. The above rules just state that ‘data.c’ must be
up-to-date with respect to ‘data.foo’, and this is already the case.

What we need is a rule that forces a rebuild when ‘data.h’ is
missing. Here it is:

 data.c: data.foo
         foo data.foo
 data.h: data.c
 ## Recover from the removal of $@
         @if test -f $@; then :; else \
           rm -f data.c; \
           $(MAKE) $(AM_MAKEFLAGS) data.c; \
         fi

The above scheme can be extended to handle more outputs and more
inputs. One of the outputs is selected to serve as a witness to the
successful completion of the command, it depends upon all inputs, and
all other outputs depend upon it. For instance, if ‘foo’ should
additionally read ‘data.bar’ and also produce ‘data.w’ and ‘data.x’, we
would write:

 data.c: data.foo data.bar
         foo data.foo data.bar
 data.h data.w data.x: data.c
 ## Recover from the removal of $@
         @if test -f $@; then :; else \
           rm -f data.c; \
           $(MAKE) $(AM_MAKEFLAGS) data.c; \
         fi

However there are now three minor problems in this setup. One is
related to the timestamp ordering of ‘data.h’, ‘data.w’, ‘data.x’, and
‘data.c’. Another one is a race condition if a parallel ‘make’ attempts
to run multiple instances of the recover block at once. Finally, the
recursive rule breaks ‘make -n’ when run with GNU ‘make’ (as well as
some other ‘make’ implementations), as it may remove ‘data.h’ even when
it should not (*note How the ‘MAKE’ Variable Works: (make)MAKE
Variable.).

Let us deal with the first problem. ‘foo’ outputs four files, but we
do not know in which order these files are created. Suppose that
‘data.h’ is created before ‘data.c’. Then we have a weird situation.
The next time ‘make’ is run, ‘data.h’ will appear older than ‘data.c’,
the second rule will be triggered, a shell will be started to execute
the ‘if…fi’ command, but actually it will just execute the ‘then’
branch, that is: nothing. In other words, because the witness we
selected is not the first file created by ‘foo’, ‘make’ will start a
shell to do nothing each time it is run.

A simple riposte is to fix the timestamps when this happens.

 data.c: data.foo data.bar
         foo data.foo data.bar
 data.h data.w data.x: data.c
         @if test -f $@; then \
           touch $@; \
         else \
 ## Recover from the removal of $@
           rm -f data.c; \
           $(MAKE) $(AM_MAKEFLAGS) data.c; \
         fi

Another solution is to use a different and dedicated file as witness,
rather than using any of ‘foo’’s outputs.

 data.stamp: data.foo data.bar
         @rm -f data.tmp
         @touch data.tmp
         foo data.foo data.bar
         @mv -f data.tmp $@
 data.c data.h data.w data.x: data.stamp
 ## Recover from the removal of $@
         @if test -f $@; then :; else \
           rm -f data.stamp; \
           $(MAKE) $(AM_MAKEFLAGS) data.stamp; \
         fi

‘data.tmp’ is created before ‘foo’ is run, so it has a timestamp
older than the output files produced by ‘foo’. It is then renamed to
‘data.stamp’ after ‘foo’ has run, because we do not want to update
‘data.stamp’ if ‘foo’ fails.

This solution still suffers from the second problem: the race
condition in the recover rule. If, after a successful build, a user
erases ‘data.c’ and ‘data.h’, and runs ‘make -j’, then ‘make’ may start
both recover rules in parallel. If the two instances of the rule
execute ‘$(MAKE) $(AM_MAKEFLAGS) data.stamp’ concurrently the build is
likely to fail (for instance, the two rules will create ‘data.tmp’, but
only one can rename it).

Admittedly, such a weird situation does not arise during ordinary
builds. It occurs only when the build tree is mutilated. Here ‘data.c’
and ‘data.h’ have been explicitly removed without also removing
‘data.stamp’ and the other output files. ‘make clean; make’ will always
recover from these situations even with parallel makes, so you may
decide that the recover rule is solely to help non-parallel make users
and leave things as-is. Fixing this requires some locking mechanism to
ensure only one instance of the recover rule rebuilds ‘data.stamp’. One
could imagine something along the following lines.

 data.c data.h data.w data.x: data.stamp
 ## Recover from the removal of $@
         @if test -f $@; then :; else \
           trap 'rm -rf data.lock data.stamp' 1 2 13 15; \
 ## mkdir is a portable test-and-set
           if mkdir data.lock 2>/dev/null; then \
 ## This code is being executed by the first process.
             rm -f data.stamp; \
             $(MAKE) $(AM_MAKEFLAGS) data.stamp; \
             result=$$?; rm -rf data.lock; exit $$result; \
           else \
 ## This code is being executed by the follower processes.
 ## Wait until the first process is done.
             while test -d data.lock; do sleep 1; done; \
 ## Succeed if and only if the first process succeeded.
             test -f data.stamp; \
           fi; \
         fi

Using a dedicated witness, like ‘data.stamp’, is very handy when the
list of output files is not known beforehand. As an illustration,
consider the following rules to compile many ‘*.el’ files into ‘*.elc’
files in a single command. It does not matter how ‘ELFILES’ is defined
(as long as it is not empty: empty targets are not accepted by POSIX).

 ELFILES = one.el two.el three.el ...
 ELCFILES = $(ELFILES:=c)

 elc-stamp: $(ELFILES)
         @rm -f elc-temp
         @touch elc-temp
         $(elisp_comp) $(ELFILES)
         @mv -f elc-temp $@

 $(ELCFILES): elc-stamp
         @if test -f $@; then :; else \
 ## Recover from the removal of $@
           trap 'rm -rf elc-lock elc-stamp' 1 2 13 15; \
           if mkdir elc-lock 2>/dev/null; then \
 ## This code is being executed by the first process.
             rm -f elc-stamp; \
             $(MAKE) $(AM_MAKEFLAGS) elc-stamp; \
             rmdir elc-lock; \
           else \
 ## This code is being executed by the follower processes.
 ## Wait until the first process is done.
             while test -d elc-lock; do sleep 1; done; \
 ## Succeed if and only if the first process succeeded.
             test -f elc-stamp; exit $$?; \
           fi; \
         fi

These solutions all still suffer from the third problem, namely that
they break the promise that ‘make -n’ should not cause any actual
changes to the tree. For those solutions that do not create lock files,
it is possible to split the recover rules into two separate recipe
commands, one of which does all work but the recursion, and the other
invokes the recursive ‘$(MAKE)’. The solutions involving locking could
act upon the contents of the ‘MAKEFLAGS’ variable, but parsing that
portably is not easy (*note (autoconf)The Make Macro MAKEFLAGS::). Here
is an example:

 ELFILES = one.el two.el three.el ...
 ELCFILES = $(ELFILES:=c)

 elc-stamp: $(ELFILES)
         @rm -f elc-temp
         @touch elc-temp
         $(elisp_comp) $(ELFILES)
         @mv -f elc-temp $@

 $(ELCFILES): elc-stamp
 ## Recover from the removal of $@
         @dry=; for f in x $$MAKEFLAGS; do \
           case $$f in \
             *=*|--*);; \
             *n*) dry=:;; \
           esac; \
         done; \
         if test -f $@; then :; else \
           $$dry trap 'rm -rf elc-lock elc-stamp' 1 2 13 15; \
           if $$dry mkdir elc-lock 2>/dev/null; then \
 ## This code is being executed by the first process.
             $$dry rm -f elc-stamp; \
             $(MAKE) $(AM_MAKEFLAGS) elc-stamp; \
             $$dry rmdir elc-lock; \
           else \
 ## This code is being executed by the follower processes.
 ## Wait until the first process is done.
             while test -d elc-lock && test -z "$$dry"; do \
               sleep 1; \
             done; \
 ## Succeed if and only if the first process succeeded.
             $$dry test -f elc-stamp; exit $$?; \
           fi; \
         fi

For completeness it should be noted that GNU ‘make’ is able to
express rules with multiple output files using pattern rules (*note
Pattern Rule Examples: (make)Pattern Examples.). We do not discuss
pattern rules here because they are not portable, but they can be
convenient in packages that assume GNU ‘make’.
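
For the record, here is a sketch of such a pattern rule; GNU ‘make’ runs
the recipe of a multi-target pattern rule only once to produce all of its
targets.

 # GNU make only; not portable to other make implementations.
 %.c %.h: %.foo
         foo $<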

27.10 Installing to Hard-Coded Locations

 My package needs to install some configuration file.  I tried to use
 the following rule, but ‘make distcheck’ fails.  Why?

      # Do not do this.
      install-data-local:
              $(INSTALL_DATA) $(srcdir)/afile $(DESTDIR)/etc/afile

 My package needs to populate the installation directory of another
 package at install-time.  I can easily compute that installation
 directory in ‘configure’, but if I install files therein,
 ‘make distcheck’ fails.  How else should I do it?

These two setups share their symptoms: ‘make distcheck’ fails because
they are installing files to hard-coded paths. In the latter case the
path is not really hard-coded in the package, but we can consider it to
be hard-coded in the system (or in whichever tool that supplies the
path). As long as the path does not use any of the standard directory
variables (‘$(prefix)’, ‘$(bindir)’, ‘$(datadir)’, etc.), the effect
will be the same: user-installations are impossible.

As a (non-root) user who wants to install a package, you usually have
no right to install anything in ‘/usr’ or ‘/usr/local’. So you do
something like ‘./configure --prefix ~/usr’ to install a package in your
own ‘~/usr’ tree.

If a package attempts to install something to some hard-coded path
(e.g., ‘/etc/afile’), regardless of this ‘--prefix’ setting, then the
installation will fail. ‘make distcheck’ performs such a ‘--prefix’
installation, hence it will fail too.

Now, there are some easy solutions.

The above ‘install-data-local’ example for installing ‘/etc/afile’
would be better replaced by

 sysconf_DATA = afile

By default ‘sysconfdir’ will be ‘$(prefix)/etc’, because this is what
the GNU Standards require. When such a package is installed on an FHS
compliant system, the installer will have to set ‘--sysconfdir=/etc’.
As the maintainer of the package you should not be concerned by such
site policies: use the appropriate standard directory variable to
install your files so that the installer can easily redefine these
variables to match their site conventions.

Installing files that should be used by another package is slightly
more involved. Let’s take an example and assume you want to install a
shared library that is a Python extension module. If you ask Python
where to install the library, it will answer something like this:

 % python -c 'from distutils import sysconfig;
              print sysconfig.get_python_lib(1,0)'
 /usr/lib/python2.5/site-packages

If you indeed use this absolute path to install your shared library,
non-root users will not be able to install the package, hence distcheck
fails.

Let’s do better. The ‘sysconfig.get_python_lib()’ function actually
accepts a third argument that will replace Python’s installation prefix.

 % python -c 'from distutils import sysconfig;
              print sysconfig.get_python_lib(1,0,"${exec_prefix}")'
 ${exec_prefix}/lib/python2.5/site-packages

Use this new path instead. If you do:
• root users can install your package with the same ‘--prefix’ as
Python (you get the behavior of the previous attempt);

• non-root users can install your package too; they will have the
extension module in a place that is not searched by Python, but they
can work around this using environment variables (and if you
installed scripts that use this shared library, it’s easy to tell
Python where to look at the beginning of your script, so the script
works in both cases).

The ‘AM_PATH_PYTHON’ macro uses similar commands to define
‘$(pythondir)’ and ‘$(pyexecdir)’ (*note Python::).
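
A short sketch of how those variables are typically consumed (the module
names here are hypothetical):

 # configure.ac
 AM_PATH_PYTHON
 # Makefile.am
 python_PYTHON = mymodule.py        # installed into $(pythondir)
 pyexec_LTLIBRARIES = _mymodule.la  # installed into $(pyexecdir)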

Of course not all tools are as advanced as Python regarding that
substitution of PREFIX. So another strategy is to figure the part of
the installation directory that must be preserved. For instance, here
is how ‘AM_PATH_LISPDIR’ (*note Emacs Lisp::) computes ‘$(lispdir)’:

 $EMACS -batch -Q -eval '(while load-path
   (princ (concat (car load-path) "\n"))
   (setq load-path (cdr load-path)))' >conftest.out
 lispdir=`sed -n
   -e 's,/$,,'
   -e '/.*\/lib\/x*emacs\/site-lisp$/{
         s,.*/lib/\(x*emacs/site-lisp\)$,${libdir}/\1,;p;q;
       }'
   -e '/.*\/share\/x*emacs\/site-lisp$/{
         s,.*/share/\(x*emacs/site-lisp\),${datarootdir}/\1,;p;q;
       }'
   conftest.out`

I.e., it just picks the first directory that looks like
‘*/lib/*emacs/site-lisp’ or ‘*/share/*emacs/site-lisp’ in the search
path of Emacs, and then substitutes ‘${libdir}’ or ‘${datarootdir}’
appropriately.

The Emacs case looks complicated because it processes a list and
expects two possible layouts; otherwise it’s easy, and the benefits for
non-root users are really worth the extra ‘sed’ invocation.

27.11 Debugging Make Rules

The rules and dependency trees generated by ‘automake’ can get rather
complex, and leave the developer head-scratching when things don’t work
as expected. Besides the debug options provided by the ‘make’ command
(*note (make)Options Summary::), here are a few further hints for
debugging makefiles generated by ‘automake’ effectively:

• If less verbose output has been enabled in the package with the use
of silent rules (*note Automake Silent Rules::), you can use ‘make
V=1’ to see the commands being executed.
• ‘make -n’ can help show what would be done without actually doing
it. Note however, that this will still execute commands prefixed
with ‘+’, and, when using GNU ‘make’, commands that contain the
strings ‘$(MAKE)’ or ‘${MAKE}’ (*note (make)Instead of
Execution::). Typically, this is helpful to show what recursive
rules would do, but it means that, in your own rules, you should
not mix such recursion with actions that change any files.(1)
Furthermore, note that GNU ‘make’ will update prerequisites for the
‘Makefile’ file itself even with ‘-n’ (*note (make)Remaking
Makefiles::).
• ‘make SHELL="/bin/bash -vx"’ can help debug complex rules. *Note
(autoconf)The Make Macro SHELL::, for some portability quirks
associated with this construct.
• ‘echo 'print: ; @echo "$(VAR)"' | make -f Makefile -f - print’ can
be handy to examine the expanded value of variables (see the sketch
after this list). You may need to use a target other than ‘print’ if
that name is already used by a target or an existing file.
• http://bashdb.sourceforge.net/remake/ provides a modified GNU
‘make’ command called ‘remake’ that copes with complex GNU
‘make’-specific Makefiles and allows you to trace execution, examine
variables, and call rules interactively, much like a debugger.
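
Here is a sketch of that variable-printing trick in action (the output
shown is hypothetical; it depends on the configured flags):

 $ echo 'print: ; @echo "$(CFLAGS)"' | make -f Makefile -f - print
 -g -O2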

———- Footnotes ———-

(1) Automake’s ‘dist’ and ‘distcheck’ rules had a bug in this regard
in that they created directories even with ‘-n’, but this has been fixed
in Automake 1.11.

27.12 Reporting Bugs

Most nontrivial software has bugs. Automake is no exception. Although
we cannot promise we can or will fix a bug, and we might not even agree
that it is a bug, we want to hear about problems you encounter. Often
we agree they are bugs and want to fix them.

To make it possible for us to fix a bug, please report it. In order
to do so effectively, it helps to know when and how to do it.

Before reporting a bug, it is a good idea to see if it is already
known. You can look at the GNU Bug Tracker (https://debbugs.gnu.org/)
and the bug-automake mailing list archives
(https://lists.gnu.org/archive/html/bug-automake/) for previous bug
reports. We previously used a Gnats database
(http://sourceware.org/cgi-bin/gnatsweb.pl?database=automake) for bug
tracking, so some bugs might have been reported there already. Please
do not use it for new bug reports, however.

If the bug is not already known, it should be reported. It is very
important to report bugs in a way that is useful and efficient. For
this, please familiarize yourself with How to Report Bugs Effectively
(http://www.chiark.greenend.org.uk/~sgtatham/bugs.html) and How to Ask
Questions the Smart Way
(http://catb.org/~esr/faqs/smart-questions.html). This helps you and
developers to save time which can then be spent on fixing more bugs and
implementing more features.

For a bug report, a feature request or other suggestions, please send
email to bug-automake@gnu.org. This will then open a new bug in the
bug tracker (https://debbugs.gnu.org/automake). Be sure to include the
versions of Autoconf and Automake that you use. Ideally, post a minimal
‘Makefile.am’ and ‘configure.ac’ that reproduces the problem you
encounter. If you have encountered test suite failures, please attach
the ‘test-suite.log’ file.


Recommended Astronomy Reading

Radio Astronomy

Most of the books below are about radio astronomy; some also cover radio interferometry.

  • Synthesis Imaging in Radio Astronomy II, G.B Taylor, C.L. Carilli, & R.A. Perley eds, Astronomical Society of the Pacific Conference Series Volume 180.
  • The Fourier Transform and its Applications, R.N. Bracewell.
  • Interferometry and Synthesis in Radio Astronomy, A.R. Thompson, J.M. Moran & G.W. Swenson.
  • Radio Astronomy, J.D. Kraus.
  • Radiotelescopes, W.N. Christiansen & J.A. Högbom.
  • Tools of Radio Astronomy, K. Rohlfs & T.L. Wilson.
  • Essential Radio Astronomy, James J. Condon & Scott M. Ransom, 2016
  • Very Long Baseline Interferometry Techniques and Applications, Marcello Felli, Ralph E. Spencer
  • Very Long Baseline Interferometry and the VLBA, Napier, Diamond & Zensus

For the kinds of science a radio telescope can do, see the following books:

  • Galactic and Extragalactic Radio Astronomy, G.L. Verschuur & K.I. Kellermann eds.

  • An Introduction to Radio Astronomy, B.F. Burke & F. Graham-Smith.

  • Leiden Radio Classes materials

Reference: www.atnf.csiro.au

A Fresh Start with su

.. note::
Early orioles here and there vie for the sunny trees; whose new swallows peck at the spring mud?
Bai Juyi, "Spring Outing on Qiantang Lake"

The official definition of su is:

su - run a command with substitute user and group ID

In a nutshell, su - user switches to another user to run a command or script. The name su stands for "switch user" (or "substitute user"): it starts a new process under a new identity, with the corresponding user ID, group ID, and all the read/write permissions that go with them.

Naturally, this requires the target user's password.

If no user argument is given, the default is to become root.

Command format

The command is used as follows:

$ su [options...] [-] [user [args...]]

Some of the more important options are:

  • -f, --fast: fast startup; do not read the startup file (shell-dependent).
  • -l, --login: gives you that fresh-start feeling; essentially the same as logging in again as the target user (root if no user is given).
  • -g, --group: specify the primary group; only root may use this.
  • -m, -p, --preserve-environment: keep the current environment variables, unless -l is also given (see the sketch after this list).
  • -s SHELL, --shell=SHELL: run the given SHELL instead of the default.
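
A quick sketch of what -m changes, assuming the util-linux su and a placeholder account named user (run as root here, so no password prompt appears):

$ echo $HOME
/root
$ su user -c 'echo $HOME'        # without -m, HOME is reset to the target's
/home/user
$ su -m user -c 'echo $HOME'     # -m keeps the caller's environment
/root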

Switching to user to run a command

The following command switches to user and runs ls:

$ su - user -c ls

Switching the SHELL

Different people favor different shells: one prefers bash, another csh. The -s option selects the shell; the following switches to csh:

$ su - user -s /bin/csh

Depending on what is installed, the available shells typically include:

  • /bin/bash
  • /bin/tcsh
  • /usr/bin/sh
  • /bin/csh
  • /sbin/nologin
  • /bin/sh

The dash matters: su [user] vs. su - [user]

su [user] switches to the other user but keeps the current environment variables; su - [user] switches completely into the new user's environment.

For example:

$ pwd
/root

$ su oper
$ pwd
/root


$ su - oper
Password:
$ pwd
/home/oper

So when switching users, prefer su - [user]; otherwise you may be bitten by stale environment variables.


Back Up and Upgrade

First create a backup, then install the new GitLab package:

sudo gitlab-rake gitlab:backup:create STRATEGY=copy  
sudo yum install -y gitlab-ce
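
If the upgrade goes wrong, the backup can be restored with the matching rake task. A minimal sketch (the BACKUP value is the timestamp prefix of the archive in your backup directory, so it depends on your installation):

sudo gitlab-rake gitlab:backup:restore BACKUP=<timestamp>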

If you are upgrading from version 8.12, you must first upgrade to 8.17.7:

sudo yum install gitlab-ce-8.17.7
# from here you can step up to the latest release
sudo yum install gitlab-ce-10.0.0
sudo yum install gitlab-ce-11.0.0
#sudo yum install gitlab-ce-12.0.0
#sudo yum install gitlab-ce-13.0.0
sudo yum install -y gitlab-ce

Configure and Start GitLab

sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart

# occasionally you will be prompted to upgrade the database
sudo gitlab-ctl pg-upgrade
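
To verify the result afterwards, a quick check with the standard omnibus tools:

sudo gitlab-ctl status
sudo gitlab-rake gitlab:env:info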

Browse and Log In

At this point you can open GitLab in the browser and set the username and password.

Enjoy!!!