Discussion:
Review and a bit of other help request
Konstantin Gizdov
2017-03-17 13:48:19 UTC
Hi all,

For quite a while now I have been the maintainer of the following packages:

root
root-extra
pythia8
xrootd-abi0 (this exists as a workaround for another maintainer not updating
their package)
unuran
root5 (legacy and poorly supported for GCC 5+, need help here especially)

and just adopted pythia.

I've actually put quite a bit of work into some of them. With the latest
upstream changes here and there, I am planning to re-optimize the builds,
but I wanted to first ask for some input from this list and gather some
suggestions.

My main focus is making CERN's ROOT and its relevant
dependencies/extensions work on Arch. I started with, and mainly
concentrate on, root and root-extra. Pythia, XRootD and Unuran are
extensions that were either unavailable or broken in Arch, so I created
'pythia8' and 'xrootd-abi0' as workarounds. I have now finally been able
to adopt 'pythia' and plan a major rewrite and optimization. I still have
to keep 'xrootd-abi0', as the current maintainer does not really update or
fix his package when new versions or problems arise. I do not plan on
filing an orphan request, as I do not want to cause trouble for people.

However, I do wish to make the current environment as good as possible for
the people that actually use it and would welcome any input from you.
Thanks in advance.

Apart from that, I wanted to better understand whether and how package
signing works with the AUR. I tried the wiki and a bit of googling, but so
far it seems package signing is only for official repos/Trusted Users. I
did not want to try it out myself before getting some advice, as I was
afraid that messing up would prevent people from installing my packages.

Regards,
Konstantin
Sebastian Lau via aur-general
2017-03-17 16:38:55 UTC
Post by Konstantin Gizdov
[...]
Apart from that, I wanted to better understand whether and how package
signing works with the AUR. I tried the wiki and a bit of googling, but so
far it seems package signing is only for official repos/Trusted Users. I
did not want to try it out myself before getting some advice, as I was
afraid that messing up would prevent people from installing my packages.
Hello Konstantin,

a few things on package signing and repositories:

- For the AUR there is no need to sign your PKGBUILDs, as your identity is
verified when you push to git with your SSH key.

- If you want to pre-compile packages and host them somewhere, you can use
`makepkg --sign`, which will use the GPG key configured in your
`/etc/makepkg.conf`. For creating repositories, see `man repo-add` (have a
look at the example; it amounts to running
`repo-add $REPONAME.db.tar.xz *.pkg.tar.xz` in your repo's directory).
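
For illustration, here is a minimal sketch of that flow; the repository name
"myrepo", all paths, and the assumption that GPGKEY is set in
/etc/makepkg.conf are placeholders:

```
# Build and sign a package, then create/update a signed repo database.
cd ~/build/some-aur-package            # directory containing the PKGBUILD
makepkg --sign                         # produces *.pkg.tar.xz plus *.pkg.tar.xz.sig

mkdir -p ~/repo/myrepo
cp ./*.pkg.tar.xz ./*.pkg.tar.xz.sig ~/repo/myrepo/

cd ~/repo/myrepo
repo-add --sign myrepo.db.tar.xz ./*.pkg.tar.xz   # also signs the database itself
```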

Regards,
Sebastian
Eli Schwartz via aur-general
2017-03-17 17:28:44 UTC
Post by Konstantin Gizdov
xrootd-abi0 (this exists as a workaround for another maintainer not updating
their package)
Don't do this. It violates the rules of the AUR and now that you have
drawn our attention to it, expect someone to file a deletion request.
Post by Konstantin Gizdov
[...] Pythia, XRootD and Unuran are extensions that were either
unavailable or broken in Arch, so I created 'pythia8' and 'xrootd-abi0' as
workarounds. I have now finally been able to adopt 'pythia' and plan a
major rewrite and optimization. I still have to keep 'xrootd-abi0', as the
current maintainer does not really update or fix his package when new
versions or problems arise. I do not plan on filing an orphan request, as
I do not want to cause trouble for people.
However, I do wish to make the current environment as good as possible for
the people that actually use it and would welcome any input from you.
Thanks in advance.
What you say makes no sense. You want it to work well, but the current
maintainer[s] is not actually maintaining the package[s]? And yet you
don't want to file an orphan request because somehow, in some
unidentified manner, an abandoned package getting a new maintainer
constitutes "trouble for people"?

So instead you violate the rules of the AUR by making forked packages,
confuse people about what is actually needed or available, risk tricking
them into using the *real* but non-working packages, and fail dismally at
"mak[ing] the current environment as good as possible".

Good job! /s

...

Now, go ahead and file that orphan request you should have filed a long
time ago, apparently.
Post by Konstantin Gizdov
Apart from that, I wanted to better understand whether and how package
signing works with the AUR. I tried the wiki and a bit of googling, but so
far it seems package signing is only for official repos/Trusted Users. I
did not want to try it out myself before getting some advice, as I was
afraid that messing up would prevent people from installing my packages.
Signing is for anyone who wants to sign things. The real question is,
what are you trying to sign?

- Built packages ==> `makepkg --sign`, or retroactively there is always
`gpg --detach-sign builtpkg-1.0-1-any.pkg.tar.xz`
- self-hosted package repository ==> repo-add --sign
- PKGBUILD ==> they don't need to be signed since users are expected to
read them... but there is always `git config commit.gpgsign true`
which users are free to check although AUR helpers certainly won't
- PKGBUILD source=() downloads ==> convince upstream to sign their
release tarballs
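
For illustration, the first and third bullets might look like this in
practice (the package file name, clone path and key ID are placeholders):

```
# Retroactively detach-sign an already-built package and verify the signature.
gpg --detach-sign builtpkg-1.0-1-any.pkg.tar.xz   # writes builtpkg-1.0-1-any.pkg.tar.xz.sig
gpg --verify builtpkg-1.0-1-any.pkg.tar.xz.sig builtpkg-1.0-1-any.pkg.tar.xz

# Sign future commits in an AUR package checkout.
cd some-aur-package                    # your local clone of the AUR git repo
git config commit.gpgsign true
git config user.signingkey ABCD1234DEADBEEF
```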
--
Eli Schwartz
Konstantin Gizdov
2017-03-17 18:17:08 UTC
Hi Eli and Sebastian,

OK, I see the orphan request got approved. I certainly wasn't looking to
draw outrage, but to get advice on what the appropriate action is. I will
update the relevant pythia and xrootd packages and submit deletion
requests myself for the others.

As to the package signing, I already know how to detach-sign. I also know
about source signing. What is not clear to me is repo-add --sign. The docs
say it will update 'the package database'. Which package database? Does
the AUR keep such info? I thought that was for Trusted Users and official
repos.

What I want to do is essentially to provide a convenient way for people to
build or directly download pre-built packages, if they choose to, and to
be able to verify them without too much hassle. What do you recommend?
Should I just make a *-bin version on the AUR with my signature and
detach-sign the binaries in my own repo? I thought this was also not the
AUR way?

Could I get someone's workflow for signed packages as an example?

Regards,
Konstantin

Eli Schwartz via aur-general
2017-03-17 19:33:32 UTC
Post by Konstantin Gizdov
Hi Eli and Sebastian,
OK, I see the orphan request got approved. I certainly wasn't looking to
draw outrage, but to get advice on what the appropriate action is. I will
update the relevant pythia and xrootd packages and submit deletion
requests myself for the others.
Thanks for fixing this yourself. It was less about outrage and more
about being extra-emphatic about what is and isn't appropriate. :)

I save the outrage/abuse for people who have already been told what the
right thing is, and refuse to listen. Everyone makes mistakes, and that
is generally okay as long as it was done in good faith and, upon
realizing the mistake, it gets fixed.
Post by Konstantin Gizdov
As to the package signing, I already know how to detach-sign. I also know
about source signing. What is not clear to me is repo-add --sign. The docs
say it will update 'the package database'. Which package database? Does
the AUR keep such info? I thought that was for Trusted Users and official
repos.
What I want to do is essentially to provide a convenient way for people to
build or directly download pre-built packages, if they choose to, and to
be able to verify them without too much hassle. What do you recommend?
Should I just make a *-bin version on the AUR with my signature and
detach-sign the binaries in my own repo? I thought this was also not the
AUR way?
Could I get someone's workflow for signed packages as an example?
No, this is entirely separate from the AUR. See the Wiki page for
"Unofficial user repositories".

Various members of the community host their own prebuilt packages on
their personal servers or whatever, for example, AUR packages that they
use and want to sync on multiple computers, or something that takes a
long compile time and they want to offer in addition to the AUR package.

`repo-add --sign` will allow you to generate a pacman-compatible sync
repository that can be copied/rsynced to your personal server and then
added to pacman.conf to download from your server, while signing the
database itself (it is ideal to sign both the packages, via `makepkg
--sign`, and the sync database itself).
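
A sketch of what that deployment could look like (host, paths and repo name
are placeholders):

```
# Publish the signed repo to a personal server.
rsync -av --delete ~/repo/myrepo/ user@example.com:/srv/http/myrepo/

# Users would then add something like the following to /etc/pacman.conf and
# import/locally sign your public key with pacman-key:
#
#   [myrepo]
#   SigLevel = Required
#   Server = https://example.com/myrepo
```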
--
Eli Schwartz
Konstantin Gizdov
2017-03-23 01:53:08 UTC
Hi again,

So I updated xrootd and pythia and submitted the relevant deletion
requests. Now, can I get some package reviews? Thanks.

Regards,
Konstantin

Eli Schwartz via aur-general
2017-03-23 03:13:59 UTC
Post by Konstantin Gizdov
Hi again,
So I updated xrootd and pythia and submitted the relevant deletion
requests. Now, can I get some package reviews? Thanks.
I know nothing about the specific packages in question, so I will merely
make some general PKGBUILD comments.

${srcdir} and ${pkgdir} must *always* be shell-quoted, as they are
user-controlled filepaths and can contain whitespace.
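
For example, in a package() function (the install paths are only
illustrative):

```
package() {
    # Quoting keeps the build working even when the build path contains spaces.
    cd "${srcdir}/${pkgname}-${pkgver}"
    make DESTDIR="${pkgdir}" install
    install -Dm644 LICENSE "${pkgdir}/usr/share/licenses/${pkgname}/LICENSE"
}
```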

update-desktop-database, update-mime-database and gtk-update-icon-cache are
handled by pacman hooks these days; remove them from the root5 install
file. You can probably also drop the pre-remove stuff by now, thereby
getting rid of the install file altogether. (yay!)

Some of your other install files imply that optional dependencies
require being installed at build-time, in which case you should simply
add them as makedepends. If that support means you cannot then uninstall
them (e.g. linking to shared libraries) then they should not be optional
at all. Automagic dependencies are *evil* and should be explicitly
enabled or explicitly disabled.

You use `[[ -d $dir ]] || mkdir $dir` several times, you can just use
`mkdir -p $dir` which does not error when $dir already exists (and
creates parent directories as needed also...).
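
For instance (the directory path is only an example):

```
# Instead of guarding every directory creation...
[[ -d "${pkgdir}/usr/share/doc/${pkgname}" ]] || mkdir "${pkgdir}/usr/share/doc/${pkgname}"
# ...mkdir -p is idempotent and creates missing parent directories as well:
mkdir -p "${pkgdir}/usr/share/doc/${pkgname}"
```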

Make already knows how to read $MAKEFLAGS; there is no need to specify it
on the command line.

xrootd uses `cmake ... || return 1`, and `make || return 2`, why???
makepkg already knows how to abort as soon as *any* error occurs.

Do not list make as a makedepends, it is assumed users will have
base-devel already installed for building packages.

In root5, I would probably turn $sys_libs into a bash array on general
principle, since bash knows how to expand them into arguments without
depending on ugly things like word-splitting of a variable.
```
declare -a sys_libs
for sys_lib in ...; do
    sys_libs+=("--disable-builtin-${sys_lib}")
done

./configure ... "${sys_libs[@]}"
```
--
Eli Schwartz
Konstantin Gizdov
2017-03-23 20:32:29 UTC
Hi Eli,

Thanks for these. As I said before, I am indeed planning on cleaning up the
old-style stuff. I was more interested in "experts" trying to build them
and then perhaps identifying problems. I currently run namcap on the
PKGBUILDs and finished packages to identify and fix problems, but it's no
substitute for the real thing.

You pointed out that "make" already reads "MAKEFLAGS" on its own. Well, I
only added this because it didn't work for me for some reason. Maybe this
was a bug in the build scripts that has since been fixed. But when I picked
up ROOT and the rest of the packages, I had to manually add "${MAKEFLAGS}"
in order for "make" to accept "-j$(nproc)". I will try it again.

You also point out the "hacky" way of dealing with what I call optional
make dependencies. The optional dependencies that you mention can be
uninstalled fine and the packages will continue to work (excluding the
relevant features, of course). However, if the packages are not present at
build time, there is no way to enable those features in the first place.
Since this is the AUR and we don't ship binaries, I was not sure of a
better way to deal with this. Any ideas?

Regards,
Konstantin

Eli Schwartz via aur-general
2017-03-23 21:33:43 UTC
Post by Konstantin Gizdov
You pointed out that "make" already reads "MAKEFLAGS" on its own. Well, I
only added this because it didn't work for me for some reason. Maybe this
was a bug in the build scripts that has since been fixed. But when I picked
up ROOT and the rest of the packages, I had to manually add "${MAKEFLAGS}"
in order for "make" to accept "-j$(nproc)". I will try it again.
`make -j$(nproc)` should be left as a user decision anyway IMHO -- smart
build systems know how to scale up for the number of cores already, and
makepkg.conf *exists* for users to declare things like that.

Note that the default makepkg.conf has a commented-out MAKEFLAGS
variable... you don't get this automatically.
Post by Konstantin Gizdov
You also point out the "hacky" way of dealing with what I call optional
make dependencies. The optional dependencies that you mention can be
uninstalled fine and the packages will continue to work (excluding the
relevant features, of course). However, if the packages are not present at
build time, there is no way to enable those features in the first place.
Since this is the AUR and we don't ship binaries, I was not sure of a
better way to deal with this. Any ideas?
That would be "in which case you should simply add them as makedepends."
That way they will be present at build-time and support will be compiled
in, but the user can then uninstall them (e.g. using `makepkg -sr`)
without harm, and they will be notified via optdepends that they might
want to have them installed at runtime as well to actually make use of
that support.
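
In PKGBUILD terms, that pattern might look roughly like this ('somelib' and
the cmake switch are placeholders):

```
depends=('required-lib')
makedepends=('cmake' 'somelib')     # present at build time, removable afterwards
optdepends=('somelib: enables the somelib-based features at runtime')

build() {
    cd "${srcdir}/${pkgname}-${pkgver}"
    cmake -DENABLE_SOMELIB=ON .     # enable explicitly rather than relying on automagic detection
    make
}
```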

As a general rule of thumb, Arch policy is that there are no optional
makedepends and we like to compile things with support for everything
possible.
--
Eli Schwartz
Konstantin Gizdov
2017-03-23 22:08:27 UTC
Hi Eli,

I am aware of '/etc/makepkg.conf'; I read the wiki. That's not what I am
talking about. A make command in a PKGBUILD's build() did not pick up my
tweaked $MAKEFLAGS. I had to explicitly write 'make $MAKEFLAGS' to get my
options to work, so I added it to the PKGBUILDs of my packages. Maybe it
works without it now, but it didn't then.

About the makedepends - for Pythia, most of the available flags don't even
have packages in the Arch universe, so I cannot simply declare them as
makedepends. The ones that do exist are on the AUR, and I would be
overstating it if I said they were maintained. So if I add them as
makedepends, no one will be able to install my package. I don't think this
is the answer. I will see what else can be done.

Regards,
Konstantin
Eli Schwartz via aur-general
2017-03-24 00:03:09 UTC
Post by Konstantin Gizdov
About the makedepends - for Pythia, most of the available flags don't even
have packages in the Arch universe, so I cannot simply declare them as
makedepends. The ones that do exist are on the AUR, and I would be
overstating it if I said they were maintained. So if I add them as
makedepends, no one will be able to install my package. I don't think this
is the answer. I will see what else can be done.
That doesn't sound good :( but then they don't make sense even as
optdepends.

You can try obtaining maintainership of them, if they are as broken as
all that.
If they have an active maintainer who is slacking on the job, then try
discussing things in the package comments; hopefully they will either
listen to feedback or hand over maintenance to you.

As for the nonexistent ones, do you have plans to package them yourself? :)
--
Eli Schwartz
Konstantin Gizdov
2017-03-28 11:18:04 UTC
Hi Eli,

Sorry for the late reply, I have been busy with work and travel.

I am planning on contributing a bit more, yes. But as you know, that takes
time and preparation. That's why I wanted someone to look over my
contributions so far, so that I am sure I am going forward on a stable
basis. I plan on gradually taking in all the interesting packages that are
orphaned or do not exist yet. However, that will be at some undefined time
in the future, and people need their packages working now :D This is why
this whole thing is a bit awkward.

Thanks for all the help and comments. I am sure I will be back for more as
soon as I start taking in more packages.

Regards,
Konstantin

Uwe Koloska
2017-03-24 00:41:50 UTC
Post by Konstantin Gizdov
I am aware of '/etc/makepkg.conf'; I read the wiki. That's not what I am
talking about. A make command in a PKGBUILD's build() did not pick up my
tweaked $MAKEFLAGS. I had to explicitly write 'make $MAKEFLAGS' to get my
options to work, so I added it to the PKGBUILDs of my packages. Maybe it
works without it now, but it didn't then.
Just a guess: how did you try to define the flag? If it doesn't already
exist in the environment (i.e. it hasn't been exported before), you have to
export it for make to pick it up from the environment. Setting the variable
only makes it available in the current script (and a PKGBUILD is just a
bash script setting well-known variables and using some predefined
functions).

Regards,
Uwe
Konstantin Gizdov
2017-03-28 11:26:17 UTC
Hi Uwe,

So I edited my /etc/makepkg.conf to have the following:
...
MAKEFLAGS="-j$(nproc)"
...

I assumed that every time 'makepkg' is run, it would source its environment
and that file as well. I am not sure why it would be otherwise. I normally
use ZSH as my shell, but I fail to see the significance in this case.
Whether it is a bash script or not, ABS should require BASH (or whatever
shell it needs) to be installed and run it as needed. Both BASH and ZSH
normally source their environment on load, so I would still expect this
file to be read on every 'makepkg' call, because that command forks and
creates a new child process with its own environment. Correct me if I am
wrong.

Regards,
Konstantin
Uwe Koloska
2017-03-28 21:11:34 UTC
Hi Konstantin,
Post by Konstantin Gizdov
...
MAKEFLAGS="-j$(nproc)"
...
this by itself only creates a variable in the current shell and not in
the environment. So if you want to use the variable from a process
started by this shell script, you have to export it.
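
A quick way to see the difference, in a fresh shell where MAKEFLAGS is not
already exported (the -j value is arbitrary):

```
# A plain assignment is only a shell variable in the current script...
MAKEFLAGS="-j4"
bash -c 'echo "child sees: [$MAKEFLAGS]"'   # prints: child sees: []

# ...while an exported variable becomes part of the environment of child processes.
export MAKEFLAGS="-j4"
bash -c 'echo "child sees: [$MAKEFLAGS]"'   # prints: child sees: [-j4]
```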
Post by Konstantin Gizdov
I assumed that every time 'makepkg' is run, it would source its environment
and that file as well. I am not sure why it would be otherwise.
It does. Without sourcing, the commands inside the script/file would be
executed in their own context and can't change the context of the
current script.
Post by Konstantin Gizdov
I normally use ZSH as my shell, but I fail to see the significance in this
case. Whether it is a bash script or not,
It's relevant, because it explains the syntax and semantics used. So if
you know how (ba)sh scripts work and know the special variables and
commands defined for makepkg, you are fine.
Post by Konstantin Gizdov
ABS should require BASH (or whatever shell it needs) to be installed and
run it as needed. Both BASH and ZSH normally source their environment on
load, so I would still expect this file to be read on every 'makepkg'
call, because that command forks and creates a new child process with its
own environment. Correct me if I am wrong.
That's correct (but I don't understand what you mean by the sentence
about "forks and creates a new child").

As I have said, this was only a shot in the dark. My point is (and it
may not be related to your problem, because I don't know your entire
buildscript) that MAKEFLAGS has to be exported if it should be used by a
process started from the buildscript (make in this case).

If 'make' is started in the buildscript with a command line like
make $MAKEFLAGS other options
then it will work; but if make expects this variable in the environment,
then you have to export it first. (For some scripts it looks like it works
without an export, but then the export has been done earlier -- it is just
a flag on the variable.)

Hope this helps
Uwe
Konstantin Gizdov
2017-03-28 22:13:29 UTC
Hi Uwe,
Post by Uwe Koloska
Post by Konstantin Gizdov
...
MAKEFLAGS="-j$(nproc)"
...
this by itself only creates a variable in the current shell and not in
the environment. So if you want to use the variable from a process
started by this shell script, you have to export it.
No, it doesn't. The current shell does not actively read/source a recently
changed file (unless configured to do so) and /etc/makepkg.conf is never
run by itself. I am not running my build script bare through my shell, so
that is irrelevant anyhow.
Post by Uwe Koloska
Post by Konstantin Gizdov
I assumed that every time 'makepkg' is run, it would source its environment
and that file as well. I am not sure why it would be otherwise.
It does. Without sourcing, the commands inside the script/file would be
executed in their own context and can't change the context of the
current script.
Post by Konstantin Gizdov
I normally use ZSH as my shell, but I fail to see the significance in this
case. Whether it is a bash script or not,
It's relevant, because it explains the syntax and semantics used. So if
you know how (ba)sh scripts work and know the special variables and
commands defined for makepkg, you are fine.
I am not sure what your point is - /usr/bin/makepkg starts with
'#!/usr/bin/bash'. The shell that starts 'makepkg' is completely
irrelevant, because 'makepkg' runs itself through BASH.
Post by Uwe Koloska
Post by Konstantin Gizdov
ABS should require BASH (or whatever shell it needs) to be installed and
run it as needed. Both BASH and ZSH normally source their environment on
load, so I would still expect this file to be read on every 'makepkg'
call, because that command forks and creates a new child process with its
own environment. Correct me if I am wrong.
That's correct (but I don't understand what you mean by the sentence
about "forks and creates a new child").
I am saying that when you type a command in a shell, the shell forks its
own process and runs the requested command in a child - Unix fork() and
exec(). Therefore, the child in this case - makepkg - is a script that
should source its own environment and use it as it needs.

I guess what I am trying to say with all of this is the following -
uncommenting and configuring MAKEFLAGS in /etc/makepkg.conf should be
enough for the 'make' call in any PKGBUILD to know about the correct
MAKEFLAGS.

I can actually confirm the last bit by echoing $MAKEFLAGS from within a
PKGBUILD and then calling 'make', where the Makefile only calls '@echo
${MAKEFLAGS}'. So I am pretty sure that I did not and should not need to
export anything anywhere (unless I run 'make' manually by myself, which is
not what we're talking about), because it has already been set and is
available.
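
A throwaway reproduction of that test might look like this (paths and the
-j value are arbitrary, and this is not a real PKGBUILD):

```
workdir=$(mktemp -d)
printf 'all:\n\t@echo "MAKEFLAGS seen by make: $(MAKEFLAGS)"\n' > "$workdir/Makefile"

export MAKEFLAGS="-j4"                  # makepkg exports MAKEFLAGS before running build()
echo "MAKEFLAGS seen by the shell: $MAKEFLAGS"
make -C "$workdir"                      # the echoed value should include the -j setting
```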

Post by Uwe Koloska
If 'make' is started in the buildscript with a command line like
make $MAKEFLAGS other options
then it will work; but if make expects this variable in the environment,
then you have to export it first. (For some scripts it looks like it works
without an export, but then the export has been done earlier -- it is just
a flag on the variable.)
I feel like you are talking about something different from what the
conversation was originally about. Maybe I am wrong. I am only referring to
the difference between 'make' and 'make $MAKEFLAGS' calls in a PKGBUILD as
executed by /usr/bin/makepkg. The first was failing for me consistently,
and this is why I added $MAKEFLAGS as an argument to the 'make' call. Maybe
some packages "clean" MAKEFLAGS through CMake. That was my point.

In other words - I will not be removing $MAKEFLAGS from the 'make' call in
my PKGBUILDs until I am sure I can consistently reproduce the build with
the correct flags. Hopefully soon.

Thanks for the help, though.

Regards,
Konstantin
Uwe Koloska
2017-03-29 20:36:05 UTC
Hi Konstantin,

sorry for not being clear enough for you to understand what I wanted to say.

I haven't looked at your PKGBUILD, but just tried to give you some hints as
to where you may be able to find the cause of your trouble.

Maybe I should try again ;-)

A PKGBUILD is just a bash script that is sourced by the bash script
makepkg, so everything that is true for a bash script is true for a
PKGBUILD.

If a command started by the script is supposed to use some variable from
the environment, this variable has to be exported in the script
(anywhere in all the files that are sourced by makepkg).

So, if make is not able to pick up the variable MAKEFLAGS from the
environment, the only explanation is that the variable is not part of the
environment when make is started. And that is only possible if it was never
exported, or if the export is removed before make is started.

And this removal is only possible from the shell that runs makepkg (and has
sourced /etc/makepkg.conf and your PKGBUILD). No process started from the
script (e.g. cmake, mentioned by you) is able to manipulate the environment
of the script!

That's all. And now it's your part to use this information and search
for the reason, why MAKEFLAGS is not part of the environment, when make
is started.

In the meantime I have looked into the makepkg script. And there I
found these two interesting pieces in run_function:

# clear user-specified makeflags if requested
if check_option "makeflags" "n"; then
    unset MAKEFLAGS
fi

and then

# ensure all necessary build variables are exported
export CPPFLAGS CFLAGS CXXFLAGS LDFLAGS MAKEFLAGS CHOST

So MAKEFLAGS *is* exported and should therefore be available in the
environment of make when it is started by the script -- *unless* the
option "!makeflags" is given, in which case it is always empty (but
exported!).

So, if you don't have this option set, there is nothing in makepkg that
unsets or unexports MAKEFLAGS and you have to search in all files that
are under your control and sourced from the makepkg script.
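
For reference, a package opts out of the user's MAKEFLAGS per PKGBUILD,
roughly like this ('example-pkg' is a placeholder):

```
pkgname=example-pkg
options=('!makeflags')    # makepkg then unsets MAKEFLAGS before running build()
```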

And if you knew all this before, then sorry for bothering you, but I had a
different impression.

Hope this helps
Uwe
