Columns (type, min-max):

question_id: int64 (82.3k - 79.7M)
title_clean: string (length 15 - 158)
body_clean: string (length 62 - 28.5k)
tags: string (length 4 - 80)
score: int64 (0 - 1.15k)
view_count: int64 (22 - 1.62M)
answer_count: int64 (0 - 30)
link: string (length 58 - 125)
16,328,624
how to update firefox on redhat via yum
I have Firefox 3.0.12 on my Red Hat 5.8 system and I'm trying to update it. But yum update firefox does not find any new version and keeps offering only 3.0.12. I have also tried updating yum itself. I have also tried downloading the Firefox tgz, but I get a lot of missing dependencies, so going that route is very tedious and I'm finding it hard to track down the dependent .so files. How do I update using yum, or is there an .rpm for Firefox that I can download and install (I did not find one on the Mozilla website)?
firefox, redhat, yum
4
10,727
2
https://stackoverflow.com/questions/16328624/how-to-update-firefox-on-redhat-via-yum
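A hedged sketch of the usual first diagnosis for the Firefox/yum question above: on RHEL 5, Firefox updates come only from the subscribed RHN/yum channels, so check what the enabled repositories actually offer before assuming yum is broken. The commands are stock yum; nothing here is taken from the asker's system.

```shell
# Clear cached metadata, then ask yum what it can actually see.
yum clean all
yum --showduplicates list firefox   # every firefox version the enabled repos carry
yum repolist enabled                # if only stale base channels appear, no newer
                                    # firefox will ever show up until the system is
                                    # re-registered with an active subscription
```

If the repolist shows only the original RHEL 5 base channel, re-registering the system (rhn_register, subscription required) is typically what unlocks newer errata packages.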
7,553,700
ImportError: No module named paramiko
I have installed "python-paramiko" and "python-pycrypto" in Red hat linux. But still when i run the sample program i get "ImportError: No module named paramiko". I checked the installed packages using below command and got confirmed. ncmdvstk:~/pdem $ rpm -qa | grep python-p python-paramiko-1.7.6-1.el3.rf python-pycrypto-2.3-1.el3.pp My sample program which give the import error: import paramiko ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy( paramiko.AutoAddPolicy()) ssh.connect('127.0.0.1', username='admin', password='admin')
python, redhat, paramiko
4
24,138
1
https://stackoverflow.com/questions/7553700/importerror-no-module-named-paramiko
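A common cause of the error above is that the RPM installs paramiko into the system Python's site-packages while a different interpreter is first on PATH. A hedged diagnostic sketch (the package name matches the question; the PYTHONPATH value is a placeholder, not a known path on the asker's box):

```shell
# Which interpreter does "python" resolve to, and where does it search?
which python
python -c 'import sys; print(sys.version); print(sys.path)'
# Where did the RPM actually install the module?
rpm -ql python-paramiko | head
# If the two locations differ, run the script with the matching interpreter,
# or extend the search path as a stopgap (path below is a placeholder):
# PYTHONPATH=/usr/lib/python2.x/site-packages python sample.py
```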
1,781,315
Choosing Linux for open source development
I work on the Windows XP platform and use Aptana Studio and MySQL for PHP development. I want to know which Linux distribution and flavor would be appropriate for my development purposes?
php, mysql, linux, ubuntu, redhat
3
290
7
https://stackoverflow.com/questions/1781315/choosing-linux-for-open-source-development
33,690,287
How to install aws-cfn-bootstrap/cfn-init package in Redhat using CloudFormation?
I am trying to launch an instance with a CloudFormation template. The instance starts, but the UserData section is not executed completely because the cfn-init/aws-cfn-bootstrap package is not installed in the Red Hat 7 AMI. I tried installing the aws-cfn-bootstrap package manually but could not, due to conflicts with the Python version. Here is the UserData section of the CloudFormation template:

"UserData": {
  "Fn::Base64": {
    "Fn::Join": [ "\n", [
      "#!/bin/bash",
      "set -x",
      "",
      "INSTANCE_ID=`/opt/aws/bin/ec2-metadata --instance-id | cut -f2 -d' '`",
      "REGION=`/opt/aws/bin/ec2-metadata --availability-zone | cut -f2 -d' ' | sed '$s/.$//'`",
      { "Fn::Join": [ "", [ "AID='", { "Fn::GetAtt": [ "eip", "AllocationId" ] }, "'" ] ] },
      "aws ec2 associate-address --region $REGION --instance-id $INSTANCE_ID --allocation-id $AID"
    ] ]
  }
}

cloud-init.log:

Nov 12 03:55:27 localhost cloud-init: Cloud-init v. 0.7.6 running 'modules:config' at Thu, 12 Nov 2015 08:55:27 +0000. Up 19.01 seconds.
Nov 12 03:55:28 localhost cloud-init: Cloud-init v. 0.7.6 running 'modules:final' at Thu, 12 Nov 2015 08:55:27 +0000. Up 19.67 seconds.
Nov 12 03:55:28 localhost cloud-init: ++ /opt/aws/bin/ec2-metadata --instance-id
Nov 12 03:55:28 localhost cloud-init: /var/lib/cloud/instance/scripts/part-001: line 4: /opt/aws/bin/ec2-metadata: No such file or directory
Nov 12 03:55:28 localhost cloud-init: ++ cut -f2 '-d '
Nov 12 03:55:28 localhost cloud-init: + INSTANCE_ID=
Nov 12 03:55:28 localhost cloud-init: ++ cut -f2 '-d '
Nov 12 03:55:28 localhost cloud-init: ++ sed '$s/.$//'
Nov 12 03:55:28 localhost cloud-init: ++ /opt/aws/bin/ec2-metadata --availability-zone
Nov 12 03:55:28 localhost cloud-init: /var/lib/cloud/instance/scripts/part-001: line 5: /opt/aws/bin/ec2-metadata: No such file or directory
Nov 12 03:55:28 localhost cloud-init: + REGION=
Nov 12 03:55:28 localhost cloud-init: + AID=eipalloc-XXXXXX
Nov 12 03:55:28 localhost cloud-init: + aws ec2 associate-address --region --instance-id --allocation-id eipalloc-XXXXXX
Nov 12 03:55:28 localhost cloud-init: /var/lib/cloud/instance/scripts/part-001: line 7: aws: command not found
Nov 12 03:55:28 localhost cloud-init: 2015-11-12 03:55:28,078 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [127]
Nov 12 03:55:28 localhost cloud-init: 2015-11-12 03:55:28,089 - cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
Nov 12 03:55:28 localhost cloud-init: 2015-11-12 03:55:28,089 - util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/site-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
amazon-web-services, redhat, aws-cloudformation, cloud-init
3
22,310
4
https://stackoverflow.com/questions/33690287/how-to-install-aws-cfn-bootstrap-cfn-init-package-in-redhat-using-cloudformation
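On non-Amazon-Linux AMIs neither the cfn helper scripts, nor awscli, nor /opt/aws/bin/ec2-metadata are preinstalled, which matches every failure in the log above. The commonly documented workaround is to bootstrap them at the top of UserData; a hedged sketch for RHEL 7 (the S3 URL is AWS's published helper-scripts tarball; package names are the stock RHEL ones):

```shell
#!/bin/bash
set -x
# Install setuptools, then the CloudFormation helper scripts (cfn-init etc.)
yum install -y python-setuptools
easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
# Provide the "aws" command the script calls later
easy_install pip && pip install awscli
# RHEL has no /opt/aws/bin/ec2-metadata; query the metadata service directly
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
```

With these lines placed before the associate-address call, the later commands have the binaries they expect instead of failing with "No such file or directory" and "command not found".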
10,888,931
Glassfish bundle in unexpected state exception
So, basically: there is a standalone (no cluster) fresh installation of GlassFish 3.1.2 on RHEL 6.2 with Java 6 and no deployed applications (a really new installation). I started the default domain domain1 on the server for the first time and stopped it without doing anything between start and stop. When I start the domain again, I get the following error:

Waiting for domain1 to start ...Error starting domain domain1.
The server exited prematurely with exit code 1.
Before it died, it produced the following output:

Launching GlassFish on Felix platform
04.06.2011 18:27:47 BundleProvisioner update
INFO: Updated bundle 1 from /home/glassfisfusr/glassfish3/glassfish/modules/endorsed/jaxb-api-osgi.jar
04.06.2011 18:27:47 BundleProvisioner update
INFO: Updated bundle 2 from /home/glassfisfusr/glassfish3/glassfish/modules/endorsed/javax.annotation.jar
04.06.2011 18:27:47 BundleProvisioner update
INFO: Updated bundle 3 from /home/glassfisfusr/glassfish3/glassfish/modules/endorsed/webservices-api-osgi.jar
04.06.2011 18:27:47 BundleProvisioner update
skipped
04.06.2011 18:27:49 BundleProvisioner update
INFO: Updated bundle 319 from /home/glassfisfusr/glassfish3/glassfish/modules/autostart/osgi-ee-resources.jar
04.06.2011 18:27:49 OSGiFrameworkLauncher launchOSGiFrameWork
INFO: Updating system bundle
Exception in thread "main" java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at com.sun.enterprise.glassfish.bootstrap.GlassFishMain.main(GlassFishMain.java:97)
        at com.sun.enterprise.glassfish.bootstrap.ASMain.main(ASMain.java:55)
Caused by: org.glassfish.embeddable.GlassFishException: java.lang.IllegalStateException: Bundle in unexpected state.
        at com.sun.enterprise.glassfish.bootstrap.osgi.OSGiGlassFishRuntimeBuilder.build(OSGiGlassFishRuntimeBuilder.java:164)
        at org.glassfish.embeddable.GlassFishRuntime._bootstrap(GlassFishRuntime.java:157)
        at org.glassfish.embeddable.GlassFishRuntime.bootstrap(GlassFishRuntime.java:110)
        at com.sun.enterprise.glassfish.bootstrap.GlassFishMain$Launcher.launch(GlassFishMain.java:112)
        ... 6 more
Caused by: java.lang.IllegalStateException: Bundle in unexpected state.
        at org.apache.felix.framework.Felix.acquireBundleLock(Felix.java:4856)
        at org.apache.felix.framework.Felix.start(Felix.java:809)
        at com.sun.enterprise.glassfish.bootstrap.osgi.OSGiGlassFishRuntimeBuilder.build(OSGiGlassFishRuntimeBuilder.java:157)
        ... 9 more
Error stopping framework: java.lang.NullPointerException
java.lang.NullPointerException
        at com.sun.enterprise.glassfish.bootstrap.GlassFishMain$Launcher$1.run(GlassFishMain.java:203)
glassfish-3, redhat, java-6
3
3,809
4
https://stackoverflow.com/questions/10888931/glassfish-bundle-in-unexpected-state-exception
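For the "Bundle in unexpected state" failure above, a frequently reported remedy is clearing the Felix OSGi cache so GlassFish reprovisions all bundles on the next start. A hedged sketch; the install path follows the log in the question, and pkill is only a precaution in case a stray launcher still holds the bundle lock:

```shell
# Make sure no leftover GlassFish/Felix process holds the bundle lock
pkill -f glassfish || true
# Wipe the per-domain Felix cache; it is rebuilt automatically on startup
rm -rf /home/glassfisfusr/glassfish3/glassfish/domains/domain1/osgi-cache/felix
# Start the domain again
/home/glassfisfusr/glassfish3/bin/asadmin start-domain domain1
```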
29,593,646
Error setting up rhc (red hat client tools)
I've installed rhc following the instructions on the OpenShift website. All seems fine when I run gem install rhc and gem update rhc, but when I try to call rhc I get the message below. I've tried reinstalling Ruby and Git, both 32- and 64-bit versions. I also thought the problem was a missing OpenSSL, but installing that made no difference. I've run out of ideas and any help would be greatly appreciated.

c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require': cannot load such file -- dl/import (LoadError)
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/pageant.rb:1:in `<top (required)>'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/agent/socket.rb:5:in `<top (required)>'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/agent.rb:22:in `<top (required)>'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/key_manager.rb:4:in `<top (required)>'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh/authentication/session.rb:4:in `<top (required)>'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/net-ssh-2.9.2/lib/net/ssh.rb:11:in `<top (required)>'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/ssh_helpers.rb:18:in `<top (required)>'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/wizard.rb:77:in `<class:Wizard>'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/wizard.rb:7:in `<module:RHC>'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/wizard.rb:6:in `<top (required)>'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands/base.rb:4:in `<top (required)>'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands/account.rb:2:in `<module:Commands>'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands/account.rb:1:in `<top (required)>'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/2.2.0/rubygems/core_ext/kernel_require.rb:54:in `require'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands.rb:189:in `block in load'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands.rb:188:in `each'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/commands.rb:188:in `load'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/lib/rhc/cli.rb:36:in `start'
        from c:/local/Ruby22/lib/ruby/gems/2.2.0/gems/rhc-1.35.1/bin/rhc:20:in `<top (required)>'
        from c:/local/Ruby22/bin/rhc:23:in `load'
        from c:/local/Ruby22/bin/rhc:23:in `<main>'
windows, openshift, redhat
3
3,986
3
https://stackoverflow.com/questions/29593646/error-setting-up-rhc-red-hat-client-tools
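The "cannot load such file -- dl/import" failure above points at a Ruby-version mismatch rather than at rhc itself: the dl stdlib library is gone in Ruby 2.2, while net-ssh 2.9.2's Windows pageant support still requires it. Two hedged options, assuming nothing about the setup beyond what the trace shows:

```shell
# Option 1: run rhc under a Ruby that still ships dl (1.9.x / 2.0.x)
ruby -v

# Option 2: move net-ssh to a release compatible with Ruby 2.2
gem list net-ssh          # confirm 2.9.2 is the version being loaded
gem update net-ssh        # then retry: rhc setup
```

Whether Option 2 suffices depends on which net-ssh versions the installed rhc accepts; if it pins net-ssh 2.9.2, downgrading Ruby is the safer route.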
3,723,975
Where is ".htaccess" apache file located in Linux Red Hat?
Does anyone know where the .htaccess file is located after I install Apache on Red Hat Linux 5?
apache, redhat
3
15,185
1
https://stackoverflow.com/questions/3723975/where-is-htaccess-apache-file-located-in-linux-red-hat
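Worth noting for the question above: installing Apache does not create a .htaccess file anywhere; it is an optional per-directory override that the administrator creates. A sketch for finding where one would go on Red Hat 5, assuming the stock httpd package paths:

```shell
# Default document root on RHEL's httpd (usually /var/www/html):
grep -i '^DocumentRoot' /etc/httpd/conf/httpd.conf
# Any .htaccess files that already exist under the web root:
find /var/www -name .htaccess 2>/dev/null
# .htaccess is ignored unless the matching <Directory> allows overrides:
grep -n 'AllowOverride' /etc/httpd/conf/httpd.conf
```

If AllowOverride is None for the directory in question, a .htaccess file placed there will be silently ignored.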
62,634,946
Ansible with Python3 on RedHat/CentOS 7 (python3-dnf issues)
I'd like to run Ansible tasks with Python 3 as the interpreter (there are lots of reasons to move to Python 3, one of them being that Python 2 will no longer be supported by Ansible). Unfortunately, doing that on Red Hat 7 is not possible, as I can't install python3-dnf there (it seems this package is only available for Red Hat 8). Has anyone had this issue and found a solution for it? Thanks
python, ansible, redhat
3
13,181
3
https://stackoverflow.com/questions/62634946/ansible-with-python3-on-redhat-centos-7-python3-dnf-issues
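For the python3-dnf question above: RHEL/CentOS 7 uses yum, not dnf, so python3-dnf does not exist for it at all. The usual answer is to keep package tasks on the platform Python 2 (which has the yum bindings) while running other tasks under Python 3. A hedged ad-hoc sketch; "rhel7" is a hypothetical inventory group:

```shell
# Package tasks must use the interpreter that has yum bindings (python2)
# and the yum module, not dnf:
ansible rhel7 -b -m yum -a 'name=httpd state=present' \
    -e 'ansible_python_interpreter=/usr/bin/python'

# Non-package modules can run under python3 if it is installed on the target:
ansible rhel7 -m ping -e 'ansible_python_interpreter=/usr/bin/python3'
```

The same split can be expressed per-group or per-task with the ansible_python_interpreter variable instead of command-line overrides.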
19,164,153
Node Installation error
I'm trying to install node.js from source on Red Hat and I keep running into the error below:

make[1]: g++: Command not found
make[1]: *** [~/node-v0.10.20/out/Release/obj.target/v8_base/deps/v8/src/accessors.o] Error 127

Relevant details: I am attempting to install as a local user, without using the sudo command, by specifying the install path:

./configure --prefix=/path/to/node && make && make install

I've checked the dependencies listed, and I do have gcc 4.4, python 2.6.6 and gmake 3.8.1. It seems the make[1] command is:

g++ '-DENABLE_DEBUGGER_SUPPORT' '-DENABLE_EXTRA_CHECKS' '-DV8_TARGET_ARCH_X64' -I../deps/v8/src -Wall -Wextra -Wno-unused-parameter -pthread -m64 -fno-strict-aliasing -O2 -fno-strict-aliasing -fno-tree-vrp -fno-tree-sink -fno-tree-vrp -fno-rtti -fno-exceptions -MMD -MF ~/node-v0.10.20/out/Release/.deps//var/opt/webdocs/wtprefork/ld/packages/node-v0.10.20/out/Release/obj.target/v8_base/deps/v8/src/accessors.o.d.raw -c -o ~/node-v0.10.20/out/Release/obj.target/v8_base/deps/v8/src/accessors.o ../deps/v8/src/accessors.cc
linux, node.js, redhat
3
1,627
2
https://stackoverflow.com/questions/19164153/node-installation-error
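"g++: Command not found" in the build above means the C++ compiler is absent: on Red Hat, gcc and g++ ship as separate packages, so having gcc 4.4 is not enough. A hedged sketch (gcc-c++ is the stock RHEL package name; installing it needs root once, but the build itself stays non-root):

```shell
# One-time, as root (or ask an administrator to run it):
sudo yum install -y gcc-c++
# Verify the compiler before retrying:
g++ --version
# Then repeat the non-root build exactly as before:
./configure --prefix=/path/to/node && make && make install
```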
4,921,580
Help installing PMD Eclipse plugin
I am trying to install PMD onto my Eclipse Helios installation. I follow the usual instructions to use the 'Install New Software' feature within Eclipse. All seems to go swimmingly and the installation completes. After restarting Eclipse, the option to use PMD is not there as expected (by right-clicking on a project). Could someone advise on any steps I may have missed? Eclipse is the 20100617-1415 version installed on Red Hat running KDE. Any guidance will be appreciated. Thanks
Help installing PMD Eclipse plugin I am trying to install PMD onto my Eclipse Helios installation. I follow the usual instructions to use the 'Install New Software' feature within Eclipse. All seems to go swimmingly and the installation completes. After restarting Eclipse, the option to use PMD is not there as expected (by right-clicking on a project). Could someone advise on any steps I may have missed? Eclipse is the 20100617-1415 version installed on Red Hat running KDE. Any guidance will be appreciated. Thanks
java, eclipse, eclipse-plugin, redhat, pmd
3
13,725
4
https://stackoverflow.com/questions/4921580/help-installing-pmd-eclipse-plugin
3,895,022
Variable initialisation not happening everywhere on certain platforms
I have a program that I built for RHEL 5 32-bit and Ubuntu 10 64-bit (C++, Qt 4.6). When I run the program on Ubuntu, all the variables are initialized without me needing to code this initialization. But when I run the program on RHEL, some of the variables are not initialized; I have noticed that they are mostly integer types and the typical values are around 154280152. The funny thing is that it only happens in a few classes. How can this be?

Update: here is a snippet of code, the header of one of the classes where this is happening (sorry for the layout, I am looking into that right now):

#ifndef FCP_CONFIG_H
#define FCP_CONFIG_H

#include "ui_fcpConfig.h"
#include
#include "fpsengine.h"
#include "fcp_cfg_delegate.h"

#define SET_COL 3
#define GLOBAL_KEY_COL 2
#define LOCAL_KEY_COL 1
#define ENABLE_COL 0

namespace Ui {
    class fcpConfig;
}

class fcpConfig : public QWidget {
    Q_OBJECT
public:
    fcpConfig(QWidget *parent, FPSengine * FPS);
    Ui::fcpConfigForm ui;
    void setupFcpCfg();
private:
    QWidget * myParent;
    FPSengine * myFPS;
    fcpCfgDelegate delegate;
    QList<QSpinBox*> failOrderList;
    QList<QRadioButton*> primaryList;
    int numFCP;
    QList<int> numFcpInEachSet;
    int currentSet;
    void updateSets();
    void refreshFailorderDuringUserEdit(int fcpPos);
    QSignalMapper * signalMapper;
    QMutex mutex;
    void sendSysStatusMsgAndPopup(QString msg);
    int curSet; //the connected Fcp's Set
private slots:
    void updateFcpFailOrderSpinBox(int absPos);
    void on_twFCP_cellClicked( int row, int column );
    void on_buttonBox_clicked(QAbstractButton* button);
private:
    template <class T>
    void buildObjList(QObject * location, QList<T> *cmdEleList, QString objName, int numObj){
        T pCmdEle;
        cmdEleList->clear();
        for(int i=0;i<numObj;i++){
            pCmdEle = location->findChild<T>(objName+QString("%1").arg(i+1));
            cmdEleList->append(pCmdEle);
        }
    }
    //used to send SysStatus and popupMsg when number of active Fcps in Set not 1
    QString activeList; //build a string representing Fcp numbers that are active.
    int iNumActive;
};

#endif // FCP_CONFIG_H
c++, qt, gcc, ubuntu, redhat
3
137
1
https://stackoverflow.com/questions/3895022/variable-initialisation-not-happening-everywhere-on-certain-platforms
47,104,454
OpenShift Online v3+ - Adding new route gives forbidden error
I successfully installed a Java web application (with the MySQL Persistent service) on OpenShift Online (Starter Plan, US Virginia server). The runtime is Tomcat. I managed to build the WAR from an external Git repository (via ssh authentication, moreover) and deploy the app to the container. It is reachable from the autogenerated route, but I'm getting a weird error setting up a custom new one linked to a www domain. Here's the message: Route is invalid: spec.host: Forbidden: you do not have permission to set the host field of the route. I suppose that's a platform bug, but I don't know how to work around it (if that's possible). Any ideas please? Many thanks in advance.
routes, openshift, redhat
3
2,542
2
https://stackoverflow.com/questions/47104454/openshift-online-v3-adding-new-route-gives-forbidden-error
25,142,492
How to find the RAM size in Red Hat Linux Server?
I am trying to find the command to show the installed memory (RAM) in Red Hat Enterprise Linux Server 6.5. I have found the following command: cat /proc/meminfo | grep MemTotal But it looks like the MemTotal value is not the actual RAM value. I want to know the real RAM of the system (similar to Installed memory(RAM) in Windows). Thanks for your help.
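A possible explanation and sketch (not from the original post): MemTotal in /proc/meminfo is the usable RAM left after the kernel reserves its share at boot, which is why it reads slightly below the installed amount. The conversion below assumes GNU awk and a Linux /proc filesystem:

```shell
# MemTotal is reported in kB; convert to GiB for a human-readable figure.
awk '/^MemTotal:/ {printf "%.1f GiB\n", $2 / 1024 / 1024}' /proc/meminfo

# For the physically installed size (what Windows calls "Installed
# memory"), query the DMI tables instead -- requires root:
#   sudo dmidecode -t memory | grep -i 'Size'
```

`free -h` gives the same MemTotal-based number in one command; only dmidecode sees the actual DIMMs.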
linux, bash, shell, redhat
3
16,904
1
https://stackoverflow.com/questions/25142492/how-to-find-the-ram-size-in-red-hat-linux-server
10,110,619
RedHat yum subversion installation
I am trying to install Subversion on Red Hat Linux, but there is a bit of a problem with a broken yum package manager. I have configured some of my own repositories from CentOS, but unfortunately there is still one broken dependency: libneon.so.27. I have tried to download it on my own, but its dependencies are quite complex and it would cost me a lot of time to download them all. Do you have any hints? (Links to some repos with that libneon? I have tried rpmforge with no success.)
linux, svn, centos, redhat, yum
3
14,735
1
https://stackoverflow.com/questions/10110619/redhat-yum-subversion-installation
69,178,134
How to find the count of and total sizes of multiple files in directory?
I have a directory; inside it are multiple directories which contain many types of files. I want to find *.jpg files and then get the count and the total size of all of them. I know I have to use find, wc -l and du -ch, but I don't know how to combine them in a single script or in a single command. find . -type f -name "*.jpg" -exec - not sure how to connect all three
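A possible way to connect the pieces (not from the original post; assumes GNU findutils/coreutils, and uses a throwaway demo directory so the commands are self-contained):

```shell
d=$(mktemp -d)               # demo directory with two sample .jpg files
mkdir "$d/sub"
printf 'aaaa' > "$d/one.jpg"
printf 'bb'   > "$d/sub/two.jpg"

# Count of *.jpg files anywhere under the directory:
find "$d" -type f -name '*.jpg' | wc -l        # -> 2

# Total size of the same files. -print0/--files0-from keep file names
# with spaces safe; tail keeps only the grand total that -c appends:
find "$d" -type f -name '*.jpg' -print0 \
  | du -ch --files0-from=- | tail -n1          # e.g. "8.0K  total"
```

Replace `"$d"` with your real directory; the two `find` invocations can also be merged into one script that prints both numbers.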
linux, bash, shell, sh, redhat
3
3,648
1
https://stackoverflow.com/questions/69178134/how-to-find-the-count-of-and-total-sizes-of-multiple-files-in-directory
58,755,754
How to cleanup docker containers and images on linux machines
We have a Red Hat Linux machine with docker and docker-compose. Now we want to clean all containers and images - as if we had a fresh, from-scratch docker install. As I understand it, we need to perform the following procedure, in this order. Am I right with this procedure, or am I missing something?

1. docker stop <CONTAINER ID>
2. docker container rm <CONTAINER ID>
3. docker image rm <IMAGE ID>

Example. First find the CONTAINER ID:

docker ps
CONTAINER ID   IMAGE                                             COMMAND                  CREATED         STATUS         PORTS                    NAMES
654fa81f4439   confluentinc/cp-enterprise-control-center:5.0.0   "/etc/confluent/dock…"   9 minutes ago   Up 9 minutes   0.0.0.0:9021->9021/tcp   control-center

1) stop the container

docker stop 654fa81f4439
654fa81f4439

2) delete the container

docker container rm 654fa81f4439
654fa81f4439

3) find the image ID

docker images
REPOSITORY                                  TAG     IMAGE ID       CREATED         SIZE
confluentinc/cp-enterprise-control-center   5.0.0   e0bd9a5edb95   15 months ago   617MB

delete the image

docker image rm e0bd9a5edb95
Untagged: confluentinc/cp-enterprise-control-center:5.0.0
Untagged: confluentinc/cp-enterprise-control-center@sha256:2e406ff8c6b1b8be6bf01ccdf68b14be0f0759db27c050dddce4b02ee0894127
Deleted: sha256:e0bd9a5edb9510a326934fa1a80a4875ab981c5007354de28f53bfb3e11bc34a
Deleted: sha256:c23255297f6d75f156baf963786d3ded1d045b726d74ed59c258dc8209bac078
Deleted: sha256:6cab492e72ca2578897b7ceecb196e728671158e262957f3c01e53fd42f6f8b4
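The per-ID procedure is correct; a possible all-at-once variant (a sketch, not from the original post — it skips the machine if Docker is absent, and the one-shot `prune` also clears networks and build cache):

```shell
# -aq / -q print bare IDs, suitable for command substitution.
if command -v docker >/dev/null 2>&1; then
  docker stop $(docker ps -aq)    2>/dev/null   # 1. stop all containers
  docker rm   $(docker ps -aq)    2>/dev/null   # 2. remove all containers
  docker rmi  $(docker images -q) 2>/dev/null   # 3. remove all images

  # One-shot equivalent (-f skips the confirmation prompt;
  # add --volumes if named volumes should go too):
  docker system prune -a -f
fi
```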
docker, docker-compose, containers, redhat, docker-machine
3
1,942
1
https://stackoverflow.com/questions/58755754/how-to-cleanup-docker-containers-and-images-on-linux-machines
26,805,940
What is libXinerama?
Can someone explain what it is in plain words? I got some installation which required libXinerama. Eventually I got it working but I would like to know what it does and how it interacts with the rest of the system. Thank you,
linux, linux-kernel, redhat
3
6,123
1
https://stackoverflow.com/questions/26805940/what-is-libxinerama
13,146,852
R 2.15 install in Redhat
I am trying to install a local copy of R on a server without admin privileges. I know almost nothing about servers, or linux. I can easily access a copy of R by typing "R", however the server I am working off of only has an old copy of R (v2.10.1). I need at least v2.14 to run my scripts, although I would prefer to install the most recent release. As far as I understand, my server runs Redhat e15 x86_64 GNU/Linux. I have the server mapped to my windows computer, and tried doing a regular install of Windows R onto the server, but when I try and run the R.exe file I get an error stating I "cannot execute binary file". I found on the CRAN website what I think I should download: Under the linux installation... redhat/e15/x86_64 But the folder only contains v2.10. I found this thread about installing R on Redhat, but I am still at a loss for how (if possible) to install/build my own copy of R.
linux, r, installation, redhat
3
10,006
1
https://stackoverflow.com/questions/13146852/r-2-15-install-in-redhat
9,948,894
/bin/bash giving a segmentation fault upon startup
I am getting a segmentation fault from bash when I try to SSH to a remote server (running RHEL 4.4.5-6). After providing my credentials, the SSH client spits back the "Last login: ..." information, and then just hangs. Out of curiosity, I pressed Ctrl-C and was able to get to a bash prompt. However, it's not the "usual" prompt that I see (it usually has my username, the server hostname, etc).

login as: xxxxxxx
xxxxx@xxxx's password:
Last login: Fri Mar 30 14:33:41 2012 from xxx.xx.xx.xxx
-bash-4.1$ echo $0
-bash
-bash-4.1$

I tried to run /bin/bash from GDB. After a medium-sized wait time, I finally got a SIGSEGV error:

(gdb) run
Starting program: /bin/bash

Program received signal SIGSEGV, Segmentation fault.
0x08067ab5 in yyparse ()
(gdb)

The last (significant) changes that I've made to the system was installing GNU screen (using yum install screen). Screen seemed to hang as well when I tried to start it (I'm assuming because it tried running bash, and got the same segfault).

Edit: I tried running rpm -V:

-bash-4.1$ rpm -V bash
-bash-4.1$

Here are my .bash* files:

.bashrc:

# .bashrc
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
# User specific environment and startup programs

.bash_profile:

# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
# User specific environment and startup programs

.bash_logout:

# ~/.bash_logout

.bash_history is quite long. I erased it, tried logging in again, and got the same results.
linux, bash, ssh, segmentation-fault, redhat
3
6,600
1
https://stackoverflow.com/questions/9948894/bin-bash-giving-a-segmentation-fault-upon-startup
3,803,679
Changing temporary file folder location in linux (for everything on the system)?
Currently it's /tmp. How can I set it to /anythingelse so that all applications use that subsequently?
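A possible approach (not from the original post): there is no single kernel-level knob, but most well-behaved programs (mktemp, sort, many compilers) consult the TMPDIR environment variable before falling back to /tmp. The sketch uses a mktemp directory as a stand-in for /anythingelse:

```shell
newtmp=$(mktemp -d)        # stand-in for /anythingelse
export TMPDIR="$newtmp"

# Programs that honor the convention now pick the new location:
mktemp                      # creates its file under $newtmp, not /tmp
```

To apply it system-wide, a line like `export TMPDIR=/anythingelse` can go in /etc/profile.d/tmpdir.sh. Programs that hard-code /tmp ignore TMPDIR; a bind mount (`mount --bind /anythingelse /tmp`) is the heavier hammer for those.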
linux, redhat, temporary-directory
3
23,405
3
https://stackoverflow.com/questions/3803679/changing-temporary-file-folder-location-in-linux-for-everything-on-the-system
75,858,287
extracting only hostname from a file
I am trying to extract only the hostname and the command status on that hostname, but I am not able to find useful grep options to do this. The file format is as follows:

server1
bash: adinfo: command not found

I am trying to use grep/sed in a way that it only gives me the hostname and the command output of the adinfo command. So far I have tried:

more file3 |grep 'com.*nd'
bash: adinfo: command not found
bash: adinfo: command not found

So I partially get the output, but not the server name listed above it. Ideally it should list the server name and that command output. Any help is appreciated.

So to clarify, I have a big file that contains this content:

server1
Local host name: server1
Joined to domain: domain.com
Joined as: server1.domain.com
Pre-win2K name: server1
Current DC: domain.com
Preferred site: Datacenter
Zone: servers Zone
Last password set: 2022-03-11 02:23:55 EST
CentrifyDC mode: connected
Licensed Features: Enabled
server2
bash: adinfo: command not found

and I only want to see where the 'adinfo' command cannot be run, along with the server name. I can narrow down to

# more file3 |grep -e 'Loc.*me' -e 'Cen.*de'
Local host name: server1
CentrifyDC mode: connected

but not the other way around, where I can only see the servers where adinfo doesn't work.
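A possible one-liner (not from the original post): grep's `-B` option prints lines of leading context, so `-B1` shows each match together with the line directly above it — the hostname. The sketch rebuilds a small sample of the file first so it is self-contained:

```shell
# Reconstructed sample with the question's layout:
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
server1
Local host name: server1
CentrifyDC mode: connected
server2
bash: adinfo: command not found
EOF

# -B1: one line of leading context before each match
grep -B1 'adinfo: command not found' "$tmp"
# server2
# bash: adinfo: command not found
```

Run against the real file3, only the failing servers and their error lines are printed; the blocks where adinfo worked never match.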
linux, awk, grep, redhat
3
269
6
https://stackoverflow.com/questions/75858287/extracting-only-hostname-from-a-file
72,366,553
OpenJDK remedy for CVE-2022-21496 yields "unsupported authority" exception thrown
After a recent RedHat OpenJDK update, an application is logging the following exception on startup while trying to process its configuration properties: javax.naming.NamingException: Cannot parse url: ldap://dev_ldap.example.com:389 [Root exception is java.net.MalformedURLException: unsupported authority: dev_ldap.example.com:389] What is meant by "unsupported authority"? A related RedHat article suggests only to "avoid special characters", but we don't appear to be using any. (Unless it is the underscore that is considered "special"?)
java, security, ldap, redhat
3
3,093
2
https://stackoverflow.com/questions/72366553/openjdk-remedy-for-cve-2022-21496-yields-unsupported-authority-exception-throw
58,381,198
How do I install tkinter on RedHat?
I am trying to install tkinter on Red Hat 7.7. I have tried every combination of "sudo yum install [whatever]" and every single time it comes up with "No package [whatever] available".

pip install tkinter
pip3 install tkinter
sudo yum install python3-tkinter
sudo yum install tkinter
sudo yum install python36-tkinter
sudo yum -y install python36u-tkinter
sudo yum -y install python36-tkinter
sudo yum install tkinter
sudo yum install python36-tkinter
sudo yum install python35-tkinter.x86_64
...etc

I have tried to find what repository I might need to enable, but RedHat support is all behind a paywall. What repository do I need to enable? At this point I am actually considering just switching to Ubuntu, as RedHat is giving me all sorts of problems.

EDIT: I tried yum search tkinter and got the following:

Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
Repo rhel-7-workstation-rpms forced skip_if_unavailable=True due to: /etc/pki/entitlement/4690243650278863397-key.pem
====================== Matched: tkinter ==========================
python3.x86_64 : Interpreter of the Python programming language

I already have python3 installed. I don't know if, had I installed via sudo yum install python3.x86_64 vs sudo yum install python3, I would have got different results.
python, tkinter, redhat
3
10,804
2
https://stackoverflow.com/questions/58381198/how-do-i-install-tkinter-on-redhat
52,256,181
How long does OpenShift Starter account provisioning take?
For about a week now, it says: Queued for provisioning Due to an increase in OpenShift Online Starter popularity, please expect a longer delay in account provisioning. You will receive an email when there is enough capacity to add your account. Thank you for your patience!
openshift, redhat
3
874
1
https://stackoverflow.com/questions/52256181/how-long-does-openshift-starter-account-provisioning-take
27,917,775
linux DD-MON-YY format in bash script
Is there any way to store a DD-MON-YY (e.g. 13-JAN-15) format date in a variable? I know

datevar=$(date '+%d-%m-%y')
echo $datevar

will display 13-01-15. Is there any way to display 13-JAN-15?
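A possible sketch (not from the original post): `%b` is date's abbreviated-month-name specifier, and `tr` upper-cases it to get the JAN form. `LC_ALL=C` pins the English month names regardless of the system locale:

```shell
datevar=$(LC_ALL=C date '+%d-%b-%y' | tr '[:lower:]' '[:upper:]')
echo "$datevar"    # e.g. 13-JAN-15
```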
linux, unix, redhat
3
4,451
1
https://stackoverflow.com/questions/27917775/linux-dd-mon-yy-format-in-bash-script
18,010,989
Berkeley DB mismatch error while configuring LDAP
I'm configuring OpenLDAP 2.4.35 on Red Hat Linux. I have already installed Berkeley DB 4.8.30 as a prerequisite. I also checked the version compatibility in OpenLDAP's README file, which says:

SLAPD: BDB and HDB backends require Oracle Berkeley DB 4.4 - 4.8, or 5.0 - 5.1. It is highly recommended to apply the patches from Oracle for a given release.

Still I'm getting this error:

checking db.h usability... yes
checking db.h presence... yes
checking for db.h... yes
checking for Berkeley DB major version in db.h... 4
checking for Berkeley DB minor version in db.h... 8
checking if Berkeley DB version supported by BDB/HDB backends... yes
checking for Berkeley DB link (-ldb-4.8)... yes
checking for Berkeley DB library and header version match... no
configure: error: Berkeley DB version mismatch

Kindly help.
linux, redhat, openldap, berkeley-db
3
7,513
1
https://stackoverflow.com/questions/18010989/berkeley-db-mismatch-error-while-configuring-ldap
2,480,410
How can I update fontconfig to a newer version in Red Hat 5.3?
I want to update fontconfig to a newer version, but it seems that the OS is still finding the old fontconfig, and I need the newer version to build Qt. How do I make Red Hat 5.3 see the newer version? I do not know if this helps, but when I did a search for fontconfig I found some files in a folder called cache. When I do yum update it tells me everything is up to date, but that version is too old and is missing FcFreeTypeQueryFace. Just send me a comment if this is the wrong site and I'll change it.
qt, fonts, redhat
3
5,329
2
https://stackoverflow.com/questions/2480410/how-can-i-update-fontconfig-to-a-newer-version-in-red-hat-5-3
64,240,264
Dependency error when install php-mbstring module on RedHat 7.9 and php 7.2
On CentOS (7.6), I have a script to deploy a set of PHP dependencies with yum tools and the remi repo. I need to migrate this installation set to Red Hat (7.9). On this distribution, I have an issue during installation of the php-mbstring module: the lib libonig.so.105()(64bit) is missing. I haven't found anything to fix this dependency cleanly. I've tried to install oniguruma (and -devel), but the lib (/usr/lib64/libonig.so.5) version doesn't match the dependency requirement. Here is the output of the yum install command:

---> Package php-mbstring.x86_64 0:7.2.34-1.el7.remi will be installed
Checking deps for php-mbstring.x86_64 0:7.2.34-1.el7.remi - u
looking for ('php-common(x86-64)', 'EQ', ('0', '7.2.34', '1.el7.remi')) as a requirement of php-mbstring.x86_64 0:7.2.34-1.el7.remi - u
looking for ('libc.so.6(GLIBC_2.14)(64bit)', None, (None, None, None)) as a requirement of php-mbstring.x86_64 0:7.2.34-1.el7.remi - u
looking for ('rtld(GNU_HASH)', None, (None, None, None)) as a requirement of php-mbstring.x86_64 0:7.2.34-1.el7.remi - u
looking for ('libpthread.so.0()(64bit)', None, (None, None, None)) as a requirement of php-mbstring.x86_64 0:7.2.34-1.el7.remi - u
looking for ('libonig.so.105()(64bit)', None, (None, None, None)) as a requirement of php-mbstring.x86_64 0:7.2.34-1.el7.remi - u
php-mbstring-7.2.34-1.el7.remi.x86_64 requires: libonig.so.105()(64bit)
--> Processing Dependency: libonig.so.105()(64bit) for package: php-mbstring-7.2.34-1.el7.remi.x86_64
Searching pkgSack for dep: libonig.so.105()(64bit)
--> Finished Dependency Resolution
Dependency Process ending
Error: Package: php-mbstring-7.2.34-1.el7.remi.x86_64 (remi-repo)
       Requires: libonig.so.105()(64bit)
php, redhat, rpm, yum, mbstring
3
8,066
1
https://stackoverflow.com/questions/64240264/dependency-error-when-install-php-mbstring-module-on-redhat-7-9-and-php-7-2
38,785,961
Exporting a Path Variable inside a shell script
I have created a script to download Terraform onto my server and install it:

#!/bin/bash
wget [URL]
unzip terraform_0.7.0_linux_amd64.zip
echo "export PATH=$PATH:/root/terraform_dir" >> /root/.bash_profile
source /root/.bash_profile
terraform --version

This code is working perfectly. But once the script completes and exits, it is as if .bash_profile were back in its original state, i.e. the path variable is not updated:

echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin

When I run terraform --version outside the shell script it is not working. But when I do su - and then try terraform --version, it actually works fine. Is there any workaround or automated script to update the .bash_profile? I don't want to restart my session every time I update the .bash_profile.
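A possible explanation (not from the original post): a script runs in a child process, so its `source` and exported variables die with it; only the current shell can change its own environment. A small demonstration:

```shell
# A child shell cannot modify the parent's environment:
MARKER=before
sh -c 'MARKER=inside_child'   # runs in a subprocess; change is lost
echo "$MARKER"                # -> before

# Sourcing (".") runs the commands in the *current* shell instead:
tmp=$(mktemp)
echo 'MARKER=after_source' > "$tmp"
. "$tmp"
echo "$MARKER"                # -> after_source
rm -f "$tmp"
```

So either source the install script into the current shell (`. ./install_terraform.sh`), or run `source /root/.bash_profile` yourself after the script finishes; the script cannot do that step for its parent.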
linux, shell, redhat
3
5,954
1
https://stackoverflow.com/questions/38785961/exporting-a-path-variable-inside-a-shell-script
35,445,516
What is the proper way to stop hiveserver2?
I've installed Hive 0.14 on top of Hadoop 2.6.0. The setup mainly involved just extracting the tar.bin file. I followed this guide to do the setup: [URL] I start hiveserver2 with a command line: ( $HIVE_HOME/bin/hiveserver2 &> hiveserver.log & ) Now, I am wondering what is the proper way to stop hiveserver2. I can kill it, but I doubt that provides a graceful exit.
What is the proper way to stop hiveserver2? I've installed hive 0.14 on top of hadoop 2.6.0. The setup mainly involved just extracting the tar.bin file. I followed this guide to do the setup. [URL] I start hiveserver2 with a command line: ( $HIVE_HOME/bin/hiveserver2 &> hiveserver.log & ) Now, I am wondering what is the proper way to stop hiveserver2. I can kill it, but I doubt that provides a graceful exit.
java, linux, hadoop, hive, redhat
3
8,168
2
https://stackoverflow.com/questions/35445516/what-is-the-proper-way-to-stop-hiveserver2
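For the question above: Hive 0.14 ships no dedicated stop script for HiveServer2, so the usual approach (an assumption based on general daemon practice, not on Hive documentation) is to record the PID at startup and send a plain SIGTERM, which lets the JVM run its shutdown hooks — reserving SIGKILL for a hung process. The pattern, sketched with a stand-in `sleep` process and an illustrative PID-file path:

```shell
# Start a long-running process (stand-in for hiveserver2) and record its PID.
sleep 300 &
echo $! > /tmp/hiveserver2.pid

# Graceful stop: SIGTERM first; escalate to SIGKILL only if it will not die.
pid=$(cat /tmp/hiveserver2.pid)
kill -TERM "$pid"
wait "$pid" 2>/dev/null || true   # reap the child; exit status reflects the signal

if kill -0 "$pid" 2>/dev/null; then
  echo "still running"
else
  echo "stopped"
  rm -f /tmp/hiveserver2.pid
fi
```

For the real daemon you would keep the launch line from the question and capture `$!` into the PID file right after starting it, instead of the `sleep` used here for demonstration.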
34,191,560
Clear log files on OpenShift - RedHat
Debugging my apps on OpenShift is becoming difficult due to excessive log data. I'm using the terminal command rhc tail -a appname to view logs Is there a way to clear the log files via a rhc command? (or any other method) Any other recommendations for viewing / handling log data on OpenShift?
Clear log files on OpenShift - RedHat Debugging my apps on OpenShift is becoming difficult due to excessive log data. I'm using the terminal command rhc tail -a appname to view logs Is there a way to clear the log files via a rhc command? (or any other method) Any other recommendations for viewing / handling log data on OpenShift?
logging, openshift, redhat, paas
3
4,015
2
https://stackoverflow.com/questions/34191560/clear-log-files-on-openshift-redhat
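One low-tech answer to the question above: as far as I know there is no `rhc` subcommand dedicated to clearing logs, but you can `rhc ssh -a appname` into the gear and truncate the log files in place — truncation (unlike `rm`) is safe while the application still holds the file open. The idiom, demonstrated on a throwaway file (paths illustrative):

```shell
# Truncate a "log" file in place with the ':' no-op and an output redirect.
mkdir -p /tmp/log_demo
printf 'old log line 1\nold log line 2\n' > /tmp/log_demo/app.log

: > /tmp/log_demo/app.log   # empties the file without deleting or recreating it
```

On an OpenShift v2 gear the logs usually live under the directory named by `$OPENSHIFT_LOG_DIR` (an assumption — check your cartridge), so `: > $OPENSHIFT_LOG_DIR/<name>.log` clears them; deleting the file instead would leave the running process writing to an unlinked inode.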
28,906,525
Removing part of the file names
I have a bunch of files like this: file123.txt.452 file456.txt.098 file789.txt.078 How can I remove the second dot and the numbers at the end from the file names? I tried using rename but I don’t think my regex is correct: rename 's/txt.*/txt/' *
Removing part of the file names I have a bunch of files like this: file123.txt.452 file456.txt.098 file789.txt.078 How can I remove the second dot and the numbers at the end from the file names? I tried using rename but I don’t think my regex is correct: rename 's/txt.*/txt/' *
regex, linux, bash, redhat, file-rename
3
477
3
https://stackoverflow.com/questions/28906525/removing-part-of-the-file-names
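A note on the question above: on Red Hat-family systems `/usr/bin/rename` is usually the util-linux version, which takes fixed strings rather than the `s///` expressions of the Perl `rename` — which may be why the command appeared to do nothing. A portable alternative is a plain shell loop with suffix-stripping parameter expansion (the demo directory and file names are illustrative):

```shell
# Rename file123.txt.452 -> file123.txt by stripping the numeric suffix.
mkdir -p /tmp/rename_demo && cd /tmp/rename_demo
touch file123.txt.452 file456.txt.098 file789.txt.078

for f in *.txt.*; do
  [ -e "$f" ] || continue          # skip the literal pattern if nothing matches
  mv -- "$f" "${f%.txt.*}.txt"     # ${f%.txt.*} drops the shortest '.txt.*' tail
done
```

Where the Perl `rename` is available, the equivalent one-liner would be `rename 's/\.txt\.\d+$/.txt/' *` — the escaped dots and the `$` anchor keep the match from eating more than the trailing number.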
25,504,087
Installing rpy on redhat
I'm new to RedHat but have been using Ubuntu for a while. I'm trying to install rpy2 using pip install rpy2 and I get the error /usr/include/features.h:164:1: warning: this is the location of the previous definition ./rpy/rinterface/_rinterface.c:86:31: error: readline/readline.h: No such file or directory In file included from ./rpy/rinterface/_rinterface.c:122: ./rpy/rinterface/embeddedr.c: In function ‘SexpObject_CObject_destroy’: ./rpy/rinterface/embeddedr.c:68: warning: implicit declaration of function ‘PyCapsule_GetPointer’ ./rpy/rinterface/embeddedr.c:69: warning: cast to pointer from integer of different size ./rpy/rinterface/embeddedr.c: In function ‘Rpy_PreserveObject’: ./rpy/rinterface/embeddedr.c:107: warning: implicit declaration of function ‘PyCapsule_New’ ./rpy/rinterface/embeddedr.c:109: warning: assignment makes pointer from integer without a cast ./rpy/rinterface/embeddedr.c:122: warning: cast to pointer from integer of different size ./rpy/rinterface/embeddedr.c: In function ‘Rpy_ReleaseObject’: ./rpy/rinterface/embeddedr.c:178: warning: cast to pointer from integer of different size ./rpy/rinterface/embeddedr.c: In function ‘Rpy_ProtectedIDs’: ./rpy/rinterface/embeddedr.c:301: warning: cast to pointer from integer of different size In file included from ./rpy/rinterface/_rinterface.c:125: ./rpy/rinterface/sexp.c: In function ‘Sexp_sexp_set’: ./rpy/rinterface/sexp.c:282: warning: implicit declaration of function ‘PyCapsule_CheckExact’ ./rpy/rinterface/sexp.c:288: warning: cast to pointer from integer of different size ./rpy/rinterface/sexp.c: In function ‘Sexp_init’: ./rpy/rinterface/sexp.c:738: warning: unused variable ‘copy’ ./rpy/rinterface/_rinterface.c: In function ‘EmbeddedR_init’: ./rpy/rinterface/_rinterface.c:1333: error: ‘rl_completer_word_break_characters’ undeclared (first use in this function) ./rpy/rinterface/_rinterface.c:1333: error: (Each undeclared identifier is reported only once ./rpy/rinterface/_rinterface.c:1333: error: for 
each function it appears in.) ./rpy/rinterface/_rinterface.c:1336: error: ‘rl_basic_word_break_characters’ undeclared (first use in this function) ./rpy/rinterface/_rinterface.c: In function ‘init_rinterface’: ./rpy/rinterface/_rinterface.c:3688: warning: assignment makes pointer from integer without a cast error: command 'gcc' failed with exit status 1 ---------------------------------------- Command /usr/bin/python -c "import setuptools;__file__='/tmp/pip-build-root/rpy2/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-BynTYR-record/install-record.txt --single-version-externally-managed failed with error code 1 in /tmp/pip-build-root/rpy2 Storing complete log in /root/.pip/pip.log I know that I need something called adns as mentioned elsewhere, but can't figure out how to install it on RedHat. I tried downloading it and installing it from a file but that didn't work.
Installing rpy on redhat I'm new to RedHat but have been using Ubuntu for a while. I'm trying to install rpy2 using pip install rpy2 and I get the error /usr/include/features.h:164:1: warning: this is the location of the previous definition ./rpy/rinterface/_rinterface.c:86:31: error: readline/readline.h: No such file or directory In file included from ./rpy/rinterface/_rinterface.c:122: ./rpy/rinterface/embeddedr.c: In function ‘SexpObject_CObject_destroy’: ./rpy/rinterface/embeddedr.c:68: warning: implicit declaration of function ‘PyCapsule_GetPointer’ ./rpy/rinterface/embeddedr.c:69: warning: cast to pointer from integer of different size ./rpy/rinterface/embeddedr.c: In function ‘Rpy_PreserveObject’: ./rpy/rinterface/embeddedr.c:107: warning: implicit declaration of function ‘PyCapsule_New’ ./rpy/rinterface/embeddedr.c:109: warning: assignment makes pointer from integer without a cast ./rpy/rinterface/embeddedr.c:122: warning: cast to pointer from integer of different size ./rpy/rinterface/embeddedr.c: In function ‘Rpy_ReleaseObject’: ./rpy/rinterface/embeddedr.c:178: warning: cast to pointer from integer of different size ./rpy/rinterface/embeddedr.c: In function ‘Rpy_ProtectedIDs’: ./rpy/rinterface/embeddedr.c:301: warning: cast to pointer from integer of different size In file included from ./rpy/rinterface/_rinterface.c:125: ./rpy/rinterface/sexp.c: In function ‘Sexp_sexp_set’: ./rpy/rinterface/sexp.c:282: warning: implicit declaration of function ‘PyCapsule_CheckExact’ ./rpy/rinterface/sexp.c:288: warning: cast to pointer from integer of different size ./rpy/rinterface/sexp.c: In function ‘Sexp_init’: ./rpy/rinterface/sexp.c:738: warning: unused variable ‘copy’ ./rpy/rinterface/_rinterface.c: In function ‘EmbeddedR_init’: ./rpy/rinterface/_rinterface.c:1333: error: ‘rl_completer_word_break_characters’ undeclared (first use in this function) ./rpy/rinterface/_rinterface.c:1333: error: (Each undeclared identifier is reported only once 
./rpy/rinterface/_rinterface.c:1333: error: for each function it appears in.) ./rpy/rinterface/_rinterface.c:1336: error: ‘rl_basic_word_break_characters’ undeclared (first use in this function) ./rpy/rinterface/_rinterface.c: In function ‘init_rinterface’: ./rpy/rinterface/_rinterface.c:3688: warning: assignment makes pointer from integer without a cast error: command 'gcc' failed with exit status 1 ---------------------------------------- Command /usr/bin/python -c "import setuptools;__file__='/tmp/pip-build-root/rpy2/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-BynTYR-record/install-record.txt --single-version-externally-managed failed with error code 1 in /tmp/pip-build-root/rpy2 Storing complete log in /root/.pip/pip.log I know that I need something called adns as mentioned elsewhere, but can't figure out how to install it on RedHat. I tried downloading it and installing it from a file but that didn't work.
python, r, redhat, rpy2
3
1,043
1
https://stackoverflow.com/questions/25504087/installing-rpy-on-redhat
21,941,863
Installing PostgreSQL 9.1 on Red Hat 6.1 x86_64 requires already installed libs
I'm following what is described here to install PostgreSQL 9.1 on Red Hat 6.1. When I launch yum install postgresql91-server it complains that libssl.so.10 and libcrypto.so.10 are missing, while I've verified that they're available under /usr/lib64/ Here are the errors I get: postgresql91-server-9.1.12-1PGDG.rhel6.x86_64 --> Finished Dependency Resolution Error: Package: postgresql91-libs-9.1.12-1PGDG.rhel6.x86_64 (pgdg91) Requires: libcrypto.so.10(libcrypto.so.10)(64bit) Error: Package: postgresql91-server-9.1.12-1PGDG.rhel6.x86_64 (pgdg91) Requires: libcrypto.so.10(libcrypto.so.10)(64bit) Error: Package: postgresql91-libs-9.1.12-1PGDG.rhel6.x86_64 (pgdg91) Requires: libssl.so.10(libssl.so.10)(64bit) Error: Package: postgresql91-server-9.1.12-1PGDG.rhel6.x86_64 (pgdg91) Requires: libssl.so.10(libssl.so.10)(64bit) Error: Package: postgresql91-9.1.12-1PGDG.rhel6.x86_64 (pgdg91) Requires: libssl.so.10(libssl.so.10)(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest What am I missing?
Installing PostgreSQL 9.1 on Red Hat 6.1 x86_64 requires already installed libs I'm following what is described here to install PostgreSQL 9.1 on Red Hat 6.1. When I launch yum install postgresql91-server it complains that libssl.so.10 and libcrypto.so.10 are missing, while I've verified that they're available under /usr/lib64/ Here are the errors I get: postgresql91-server-9.1.12-1PGDG.rhel6.x86_64 --> Finished Dependency Resolution Error: Package: postgresql91-libs-9.1.12-1PGDG.rhel6.x86_64 (pgdg91) Requires: libcrypto.so.10(libcrypto.so.10)(64bit) Error: Package: postgresql91-server-9.1.12-1PGDG.rhel6.x86_64 (pgdg91) Requires: libcrypto.so.10(libcrypto.so.10)(64bit) Error: Package: postgresql91-libs-9.1.12-1PGDG.rhel6.x86_64 (pgdg91) Requires: libssl.so.10(libssl.so.10)(64bit) Error: Package: postgresql91-server-9.1.12-1PGDG.rhel6.x86_64 (pgdg91) Requires: libssl.so.10(libssl.so.10)(64bit) Error: Package: postgresql91-9.1.12-1PGDG.rhel6.x86_64 (pgdg91) Requires: libssl.so.10(libssl.so.10)(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest What am I missing?
postgresql, redhat, yum
3
7,194
1
https://stackoverflow.com/questions/21941863/installing-postgresql-9-1-on-red-hat-6-1-x86-64-requires-already-installed-libs
14,379,660
gcc i686 on x86_64 platform
I am having trouble installing GCC i686 on a RHEL x86_64 system. Indeed, I have to build some 32-bit software and shared libraries on this platform. I can build this software and these libraries on 32-bit platforms (Linux or Windows). My questions are at the end of this post. My first problem was this error (during a build, under Eclipse Helios): In file included from /usr/include/stdlib.h:314, from ../../../../../XXXX.h:19, from /XXXX.c:33: /usr/include/sys/types.h:150: error: duplicate 'unsigned' /usr/include/sys/types.h:151: error: duplicate 'unsigned' /usr/include/sys/types.h:151: error: duplicate 'short' /usr/include/sys/types.h:152: error: duplicate 'unsigned' /usr/include/sys/types.h:152: error: two or more data types in declaration specifiers make: *** [XXXX.o] Error 1 To correct this error, I had to put the stdlib.h include before all the other files, but I have a lot of files, and sometimes this trick did not work anyway. Moreover, I should not modify the source files. I have exactly the same problem when I use a makefile given by a friend to build a shared library. This makefile works well on his platform (the same as mine: RHEL x86_64, GCC 4.4.6). He told me the error appears because I am using x86_64 libs to build 32-bit software (or shared libs).
Here's my version of GCC : GCC version [root@localhost bin]# gcc -v Target: x86_64-redhat-linux Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=[URL] --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux thread: posix gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) [root@localhost bin]# rpm -qa |grep gcc gcc-c++-4.4.6-3.el6.x86_64 gcc-4.4.6-3.el6.x86_64 gcc-gfortran-4.4.6-3.el6.x86_64 So first, I installed glibc.i686 and libgcc.i686 packages from the RHEL DVD setup. Now I have: Installed packages (from redhat DVD) [root@localhost bin]# rpm -qa |grep glibc glibc-common-2.12-1.47.el6.x86_64 glibc-2.12-1.47.el6.x86_64 glibc-devel-2.12-1.47.el6.x86_64 glibc-devel-2.12-1.47.el6.i686 glibc-headers-2.12-1.47.el6.x86_64 glibc-2.12-1.47.el6.i686 [root@localhost bin]# rpm -qa |grep libgcc libgcc-4.4.6-3.el6.x86_64 libgcc-4.4.6-3.el6.i686 Since GCC is x86_64, I read some documents about cross compilation, especially this one: wiki.osdev.org/GCC_Criss-Compiler So I downloaded: gcc-4.4.6.tar.gz, binutils-2.23.tar.gz, gmp-5.0.2.tar.gz, and mpfr-3.1.1.tar.gz. I put the directories gmp-5.0.2 and mpfr-3.1.1 in the gcc-4.4.6 directory (and I renamed gmp-5.0.2 to gmp, and mpfr-3.1.1 to mpfr).
I followed the wiki.osdev instructions, that is: export PREFIX=/usr/local/cross export TARGET=i686-elf cd /usr/src mkdir build-binutils build-gcc cd /usr/src/build-binutils ../binutils-x.xx/configure --target=$TARGET --prefix=$PREFIX --disable-nls make all make install cd /usr/src/build-gcc export PATH=$PATH:$PREFIX/bin ../gcc-x.x.x/configure --target=$TARGET --prefix=$PREFIX --disable-nls \ --enable-languages=c,c++ --without-headers make all-gcc make install-gcc ' make all ' and ' make install ' for binutils => OK ' make all-gcc ' --> 1st error: missing "mpfr.h" in "real.h". So I added mpfr.h in gcc-4.4.6/gcc and it was OK (maybe not actually ...) --> 2nd error (the only one now): [...] gcc -g -O2 -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wcast-qual -Wold-style-definition -Wc++-compat -Wmissing-format-attribute -pedantic -Wno-long-long -Wno-variadic-macros -Wno-overlength-strings -DHAVE_CONFIG_H -o cc1-dummy c-lang.o stub-objc.o attribs.o c-errors.o c-lex.o c-pragma.o c-decl.o c-typeck.o c-convert.o c-aux-info.o c-common.o c-opts.o c-format.o c-semantics.o c-ppoutput.o c-cppbuiltin.o c-objc-common.o c-dump.o c-pch.o c-parser.o i386-c.o c-gimplify.o tree-mudflap.o c-pretty-print.o c-omp.o dummy-checksum.o \ main.o libbackend.a ../libcpp/libcpp.a ../libdecnumber/libdecnumber.a ../libcpp/libcpp.a ../libiberty/libiberty.a ../libdecnumber/libdecnumber.a -L/usr/src/build-gcc/./gmp/.libs -L/usr/src/build-gcc/./gmp/_libs -L/usr/src/build-gcc/./mpfr/.libs -L/usr/src/build-gcc/./mpfr/_libs -lmpfr -lgmp **/usr/bin/ld: cannot find -lmpfr collect2: ld returned 1 exit status make[1]: *** [cc1-dummy] Error 1 make[1]: Leaving directory `/usr/src/build-gcc/gcc' make: *** [all-gcc] Error 2** **Finally, my questions are: Can this cross-compilation approach resolve my problem? What is the right way to resolve the missing -lmpfr linker error?** I did a lot of research before posting.
My Linux knowledge is not very good at this time. Thank you in advance for your help. EDIT #1: I've already tried the -m32 flag but the problem is still here. For example, if I run a makefile: [root@localhost makefile]# make -f sharedLib.mak gcc -m32 -march=i686 -O2 -Wall -I ../../sharedLib/inc/ -o XXX.o -c ../src/XXX.c In file included from /usr/include/stdlib.h:314, from ../src/XXX.c:51: /usr/include/sys/types.h:150: error: duplicate 'unsigned' /usr/include/sys/types.h:151: error: duplicate 'unsigned' /usr/include/sys/types.h:151: error: duplicate 'short' /usr/include/sys/types.h:152: error: duplicate 'unsigned' /usr/include/sys/types.h:152: error: two or more data types in declaration specifiers make: *** [XXX.o] Error 1 Here's XXX.c: #include "alphabet.h" #include "outils.h" #include "erreur.h" #include <string.h> #include <stdlib.h> (line 51 error) If I modify it this way: #include <stdlib.h> #include "alphabet.h" #include "outils.h" #include "erreur.h" #include <string.h> Everything is OK for XXX.c but the error appears for the next source file ...
gcc i686 on x86_64 platform I am having trouble installing GCC i686 on a RHEL x86_64 system. Indeed, I have to build some 32-bit software and shared libraries on this platform. I can build this software and these libraries on 32-bit platforms (Linux or Windows). My questions are at the end of this post. My first problem was this error (during a build, under Eclipse Helios): In file included from /usr/include/stdlib.h:314, from ../../../../../XXXX.h:19, from /XXXX.c:33: /usr/include/sys/types.h:150: error: duplicate 'unsigned' /usr/include/sys/types.h:151: error: duplicate 'unsigned' /usr/include/sys/types.h:151: error: duplicate 'short' /usr/include/sys/types.h:152: error: duplicate 'unsigned' /usr/include/sys/types.h:152: error: two or more data types in declaration specifiers make: *** [XXXX.o] Error 1 To correct this error, I had to put the stdlib.h include before all the other files, but I have a lot of files, and sometimes this trick did not work anyway. Moreover, I should not modify the source files. I have exactly the same problem when I use a makefile given by a friend to build a shared library. This makefile works well on his platform (the same as mine: RHEL x86_64, GCC 4.4.6). He told me the error appears because I am using x86_64 libs to build 32-bit software (or shared libs).
Here's my version of GCC : GCC version [root@localhost bin]# gcc -v Target: x86_64-redhat-linux Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=[URL] --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux thread: posix gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) [root@localhost bin]# rpm -qa |grep gcc gcc-c++-4.4.6-3.el6.x86_64 gcc-4.4.6-3.el6.x86_64 gcc-gfortran-4.4.6-3.el6.x86_64 So first, I installed glibc.i686 and libgcc.i686 packages from the RHEL DVD setup. Now I have: Installed packages (from redhat DVD) [root@localhost bin]# rpm -qa |grep glibc glibc-common-2.12-1.47.el6.x86_64 glibc-2.12-1.47.el6.x86_64 glibc-devel-2.12-1.47.el6.x86_64 glibc-devel-2.12-1.47.el6.i686 glibc-headers-2.12-1.47.el6.x86_64 glibc-2.12-1.47.el6.i686 [root@localhost bin]# rpm -qa |grep libgcc libgcc-4.4.6-3.el6.x86_64 libgcc-4.4.6-3.el6.i686 Since GCC is x86_64, I read some documents about cross compilation, especially this one: wiki.osdev.org/GCC_Criss-Compiler So I downloaded: gcc-4.4.6.tar.gz, binutils-2.23.tar.gz, gmp-5.0.2.tar.gz, and mpfr-3.1.1.tar.gz. I put the directories gmp-5.0.2 and mpfr-3.1.1 in the gcc-4.4.6 directory (and I renamed gmp-5.0.2 to gmp, and mpfr-3.1.1 to mpfr).
I followed the wiki.osdev instructions, that is: export PREFIX=/usr/local/cross export TARGET=i686-elf cd /usr/src mkdir build-binutils build-gcc cd /usr/src/build-binutils ../binutils-x.xx/configure --target=$TARGET --prefix=$PREFIX --disable-nls make all make install cd /usr/src/build-gcc export PATH=$PATH:$PREFIX/bin ../gcc-x.x.x/configure --target=$TARGET --prefix=$PREFIX --disable-nls \ --enable-languages=c,c++ --without-headers make all-gcc make install-gcc ' make all ' and ' make install ' for binutils => OK ' make all-gcc ' --> 1st error: missing "mpfr.h" in "real.h". So I added mpfr.h in gcc-4.4.6/gcc and it was OK (maybe not actually ...) --> 2nd error (the only one now): [...] gcc -g -O2 -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wcast-qual -Wold-style-definition -Wc++-compat -Wmissing-format-attribute -pedantic -Wno-long-long -Wno-variadic-macros -Wno-overlength-strings -DHAVE_CONFIG_H -o cc1-dummy c-lang.o stub-objc.o attribs.o c-errors.o c-lex.o c-pragma.o c-decl.o c-typeck.o c-convert.o c-aux-info.o c-common.o c-opts.o c-format.o c-semantics.o c-ppoutput.o c-cppbuiltin.o c-objc-common.o c-dump.o c-pch.o c-parser.o i386-c.o c-gimplify.o tree-mudflap.o c-pretty-print.o c-omp.o dummy-checksum.o \ main.o libbackend.a ../libcpp/libcpp.a ../libdecnumber/libdecnumber.a ../libcpp/libcpp.a ../libiberty/libiberty.a ../libdecnumber/libdecnumber.a -L/usr/src/build-gcc/./gmp/.libs -L/usr/src/build-gcc/./gmp/_libs -L/usr/src/build-gcc/./mpfr/.libs -L/usr/src/build-gcc/./mpfr/_libs -lmpfr -lgmp **/usr/bin/ld: cannot find -lmpfr collect2: ld returned 1 exit status make[1]: *** [cc1-dummy] Error 1 make[1]: Leaving directory `/usr/src/build-gcc/gcc' make: *** [all-gcc] Error 2** **Finally, my questions are: Can this cross-compilation approach resolve my problem? What is the right way to resolve the missing -lmpfr linker error?** I did a lot of research before posting.
My Linux knowledge is not very good at this time. Thank you in advance for your help. EDIT #1: I've already tried the -m32 flag but the problem is still here. For example, if I run a makefile: [root@localhost makefile]# make -f sharedLib.mak gcc -m32 -march=i686 -O2 -Wall -I ../../sharedLib/inc/ -o XXX.o -c ../src/XXX.c In file included from /usr/include/stdlib.h:314, from ../src/XXX.c:51: /usr/include/sys/types.h:150: error: duplicate 'unsigned' /usr/include/sys/types.h:151: error: duplicate 'unsigned' /usr/include/sys/types.h:151: error: duplicate 'short' /usr/include/sys/types.h:152: error: duplicate 'unsigned' /usr/include/sys/types.h:152: error: two or more data types in declaration specifiers make: *** [XXX.o] Error 1 Here's XXX.c: #include "alphabet.h" #include "outils.h" #include "erreur.h" #include <string.h> #include <stdlib.h> (line 51 error) If I modify it this way: #include <stdlib.h> #include "alphabet.h" #include "outils.h" #include "erreur.h" #include <string.h> Everything is OK for XXX.c but the error appears for the next source file ...
linux, gcc, redhat
3
25,258
1
https://stackoverflow.com/questions/14379660/gcc-i686-on-x86-64-platform
9,739,653
Perl system calls when running as another user using sudo
I have developed a Perl script which provides menu-driven functionality to allow users to carry out some simple tasks. I need the users to be able to carry out tasks such as copying files (keeping the current date and permissions) and running other programs (such as less or vi) as a different user. The script makes a lot of use of the system() function. I want the users to start the menu by calling: sudo -u perluser /usr/bin/perl /data/perlscripts/scripta.pl This should start the script as perluser, which it does, and then carry out different tasks depending on what the user selects. The problem is that whenever I use a system call such as system("clear"); I get the following error Can't exec "clear": Permission denied at /data/perlscripts/scripta.pl line 3 If I run the script by logging in as perluser then it all runs successfully. Is there any way to get this working? I do not want users to be able to log in as perluser as I need to control what they are able to run. I also do not want to run a command like system("sudo -u perluser clear"); as I would then require a different team to set up all the sudo commands I wanted to run (which they will probably refuse to do) and this would not be scalable if I have to add extra commands at some point. Thanks,
Perl system calls when running as another user using sudo I have developed a Perl script which provides menu-driven functionality to allow users to carry out some simple tasks. I need the users to be able to carry out tasks such as copying files (keeping the current date and permissions) and running other programs (such as less or vi) as a different user. The script makes a lot of use of the system() function. I want the users to start the menu by calling: sudo -u perluser /usr/bin/perl /data/perlscripts/scripta.pl This should start the script as perluser, which it does, and then carry out different tasks depending on what the user selects. The problem is that whenever I use a system call such as system("clear"); I get the following error Can't exec "clear": Permission denied at /data/perlscripts/scripta.pl line 3 If I run the script by logging in as perluser then it all runs successfully. Is there any way to get this working? I do not want users to be able to log in as perluser as I need to control what they are able to run. I also do not want to run a command like system("sudo -u perluser clear"); as I would then require a different team to set up all the sudo commands I wanted to run (which they will probably refuse to do) and this would not be scalable if I have to add extra commands at some point. Thanks,
linux, perl, redhat
3
9,588
2
https://stackoverflow.com/questions/9739653/perl-system-calls-when-running-as-another-user-using-sudo
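A likely culprit in the question above (an educated guess, since the error is environment-dependent): sudo normally sanitizes or replaces the caller's environment (`env_reset` / `secure_path` in sudoers), and a PATH entry that perluser cannot search can turn command lookup into exactly this kind of "Permission denied". Two defensive moves inside the Perl script are to set a known-good `$ENV{PATH}` at the top and to call external programs by absolute path, e.g. `system("/usr/bin/clear")`. The underlying lookup behaviour can be demonstrated with any shell:

```shell
# With an unusable PATH, bare command names fail but absolute paths still work.
if env -i PATH=/nonexistent /bin/sh -c 'ls /' >/dev/null 2>&1; then
  echo "bare name: ok"
else
  echo "bare name: failed"       # command lookup needs a usable PATH
fi

if env -i PATH=/nonexistent /bin/sh -c '/bin/ls /' >/dev/null 2>&1; then
  echo "absolute path: ok"       # exec by full path does no PATH search
else
  echo "absolute path: failed"
fi
```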
128,933
Perl or Python script to remove user from group
I am putting together a Samba-based server as a Primary Domain Controller, and ran into a cute little problem that should have been solved many times over. But a number of searches did not yield a result. I need to be able to remove an existing user from an existing group with a command line script. It appears that usermod easily allows me to add a user to a supplementary group with this command: usermod -a -G supgroup1,supgroup2 username Without the "-a" option, if the user is currently a member of a group which is not listed, the user will be removed from the group. Does anyone have a Perl (or Python) script that allows the specification of a user and group for removal? Am I missing an obvious existing command, or well-known solution for this? Thanks in advance! Thanks to J.J. for the pointer to the Unix::Group module, which is part of Unix-ConfigFile. It looks like the command deluser would do what I want, but was not in any of my existing repositories. I went ahead and wrote the Perl script using the Unix::Group module. Here is the script for your sysadmining pleasure. #!/usr/bin/perl # # Usage: removegroup.pl login group # Purpose: Removes a user from a group while retaining current primary and # supplementary groups. # Notes: There is a Debian specific utility that can do this called deluser, # but I did not want any cross-distribution dependencies # # Date: 25 September 2008 # Validate Arguments (correct number, format etc.)
if ( ($#ARGV < 1) || (2 < $#ARGV) ) { print "\nUsage: removegroup.pl login group\n\n"; print "EXIT VALUES\n"; print " The removeuser.pl script exits with the following values:\n\n"; print " 0 success\n\n"; print " 1 Invalid number of arguments\n\n"; print " 2 Login or Group name supplied greater than 16 characters\n\n"; print " 3 Login and/or Group name contains invalid characters\n\n"; exit 1; } # Check for well formed group and login names if ((16 < length($ARGV[0])) ||(16 < length($ARGV[1]))) { print "Usage: removegroup.pl login group\n"; print "ERROR: Login and Group names must be less than 16 Characters\n"; exit 2; } if ( ( $ARGV[0] !~ m{^[a-z_]+[a-z0-9_-]*$}) || ( $ARGV[0] !~ m{^[a-z_]+[a-z0-9_-]*$} ) ) { print "Usage: removegroup.pl login group\n"; print "ERROR: Login and/or Group name contains invalid characters\n"; exit 3; } # Set some variables for readability $login=$ARGV[0]; $group=$ARGV[1]; # Requires the GroupFile interface from perl-Unix-Configfile use Unix::GroupFile; $grp = new Unix::GroupFile "/etc/group"; $grp->remove_user("$group", "$login"); $grp->commit(); undef $grp; exit 0;
Perl or Python script to remove user from group I am putting together a Samba-based server as a Primary Domain Controller, and ran into a cute little problem that should have been solved many times over. But a number of searches did not yield a result. I need to be able to remove an existing user from an existing group with a command line script. It appears that usermod easily allows me to add a user to a supplementary group with this command: usermod -a -G supgroup1,supgroup2 username Without the "-a" option, if the user is currently a member of a group which is not listed, the user will be removed from the group. Does anyone have a Perl (or Python) script that allows the specification of a user and group for removal? Am I missing an obvious existing command, or well-known solution for this? Thanks in advance! Thanks to J.J. for the pointer to the Unix::Group module, which is part of Unix-ConfigFile. It looks like the command deluser would do what I want, but was not in any of my existing repositories. I went ahead and wrote the Perl script using the Unix::Group module. Here is the script for your sysadmining pleasure. #!/usr/bin/perl # # Usage: removegroup.pl login group # Purpose: Removes a user from a group while retaining current primary and # supplementary groups. # Notes: There is a Debian specific utility that can do this called deluser, # but I did not want any cross-distribution dependencies # # Date: 25 September 2008 # Validate Arguments (correct number, format etc.)
if ( ($#ARGV < 1) || (2 < $#ARGV) ) { print "\nUsage: removegroup.pl login group\n\n"; print "EXIT VALUES\n"; print " The removeuser.pl script exits with the following values:\n\n"; print " 0 success\n\n"; print " 1 Invalid number of arguments\n\n"; print " 2 Login or Group name supplied greater than 16 characters\n\n"; print " 3 Login and/or Group name contains invalid characters\n\n"; exit 1; } # Check for well formed group and login names if ((16 < length($ARGV[0])) ||(16 < length($ARGV[1]))) { print "Usage: removegroup.pl login group\n"; print "ERROR: Login and Group names must be less than 16 Characters\n"; exit 2; } if ( ( $ARGV[0] !~ m{^[a-z_]+[a-z0-9_-]*$}) || ( $ARGV[0] !~ m{^[a-z_]+[a-z0-9_-]*$} ) ) { print "Usage: removegroup.pl login group\n"; print "ERROR: Login and/or Group name contains invalid characters\n"; exit 3; } # Set some variables for readability $login=$ARGV[0]; $group=$ARGV[1]; # Requires the GroupFile interface from perl-Unix-Configfile use Unix::GroupFile; $grp = new Unix::GroupFile "/etc/group"; $grp->remove_user("$group", "$login"); $grp->commit(); undef $grp; exit 0;
python, perl, system-administration, centos, redhat
3
3,389
4
https://stackoverflow.com/questions/128933/perl-or-python-script-to-remove-user-from-group
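On modern shadow-utils systems the usual answer to the question above is simply `gpasswd -d login group` (or `usermod` with the recomputed group list). The heart of what the Perl script's `remove_user()` call does — dropping one name from the comma-separated member field of a group(5) line — can be sketched without touching the real /etc/group; the group name and members below are made up:

```shell
# Remove one user from the member field of an /etc/group-style line.
line="devs:x:1001:bob,alice,carol"

printf '%s\n' "$line" | awk -F: -v user="alice" '
  BEGIN { OFS = ":" }
  {
    n = split($4, members, ",")
    out = ""
    for (i = 1; i <= n; i++)
      if (members[i] != user)
        out = out (out == "" ? "" : ",") members[i]
    $4 = out                # reassigning a field rebuilds the record with OFS
    print
  }'
# prints: devs:x:1001:bob,carol
```

In practice `gpasswd -d alice devs` does this atomically with proper file locking; rewriting /etc/group from a script risks corruption if another tool updates it at the same time.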
60,933,161
Where do I view my CronJob in OpenShift Container Platform?
This is a really basic question, but I just can't seem to find the answer ANYWHERE. I need to create a CronJob on OpenShift Container Platform . I wasn't able to find a page on the Container Platform on how to directly create a CronJob. But I did manage to find instructions on creating it by pasting the Job YAML file in the Add to Application button. [URL] Now, having created a CronJob (I think), how do I even find/modify/delete it on the Container Platform?
Where do I view my CronJob in OpenShift Container Platform? This is a really basic question, but I just can't seem to find the answer ANYWHERE. I need to create a CronJob on OpenShift Container Platform . I wasn't able to find a page on the Container Platform on how to directly create a CronJob. But I did manage to find instructions on creating it by pasting the Job YAML file in the Add to Application button. [URL] Now, having created a CronJob (I think), how do I even find/modify/delete it on the Container Platform?
kubernetes, cron, openshift, redhat
3
3,020
2
https://stackoverflow.com/questions/60933161/where-do-i-view-my-cronjob-in-openshift-container-platform
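Once the CronJob object exists it is just another API resource, so the `oc` CLI can list and manage it. A sketch (requires a logged-in `oc` session; the CronJob name `my-cronjob` is a placeholder):

```
# List CronJobs in the current project (add -n <project> for another project)
oc get cronjobs

# Inspect, edit in place, or delete one
oc describe cronjob my-cronjob
oc edit cronjob my-cronjob
oc delete cronjob my-cronjob

# Jobs spawned by the CronJob's schedule show up as separate resources
oc get jobs
```

In the web console these typically appear under the project's Resources/Other Resources view rather than alongside deployments, which is why they are easy to miss.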
47,538,754
How do I install docker on RHEL 7 offline?
New to docker. Need to install docker on a RHEL 7 (no gui) system. Does the RHEL 7 installation come with docker already on it? If not, where do I get it from? (I cannot use the docker software at docker.com, it has to come from RedHat - government rules, not mine) Once procured, how do I install it on a system that is not connected to the internet. I hope I've made my request as simple as possible, let the questions begin.
How do I install docker on RHEL 7 offline? New to docker. Need to install docker on a RHEL 7 (no gui) system. Does the RHEL 7 installation come with docker already on it? If not, where do I get it from? (I cannot use the docker software at docker.com, it has to come from RedHat - government rules, not mine) Once procured, how do I install it on a system that is not connected to the internet. I hope I've made my request as simple as possible, let the questions begin.
docker, redhat, rhel7, redhat-containers
3
11,407
3
https://stackoverflow.com/questions/47538754/how-do-i-install-docker-on-rhel-7-offline
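On RHEL 7, Red Hat's `docker` package comes from the `rhel-7-server-extras-rpms` repository rather than the install media. One common offline pattern, sketched below, is to download the RPMs plus dependencies on an internet-connected RHEL 7 machine registered to the same channels, carry them across, and install locally (`yumdownloader` is in yum-utils; paths are illustrative):

```
# On an internet-connected, subscribed RHEL 7 box:
subscription-manager repos --enable=rhel-7-server-extras-rpms
yumdownloader --resolve --destdir=/tmp/docker-rpms docker

# Copy /tmp/docker-rpms to the offline system (USB, scp, etc.), then there:
cd /path/to/docker-rpms
yum localinstall *.rpm        # resolves ordering among the local RPMs
systemctl enable docker
systemctl start docker
```

This keeps everything Red Hat-signed, which matters for the "must come from Red Hat" constraint; the docker.com (docker-ce) packages are a different, unsupported build.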
41,975,192
R: compilation of multicool fails
I want to install the rugarch library on a system running Red Hat 7.3 and R version 3.3.1. Unfortunately, I do not have admin rights on the machine. The installation of rugarch fails due to a compilation error of multicool . Running install.packages('multicool') terminates with the error message:

compilation aborted for multicool.cpp (code 2)
make: *** [multicool.o] Error 2
ERROR: compilation failed for package ‘multicool’

And here is the full output (the same error #308 is repeated for assignment, +, -, *, /, unary -, ==, != and ~ on both std::complex<float> and std::complex<double>; only the first occurrence is shown):

> install.packages('multicool')
Installing package into ‘/pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3’
(as ‘lib’ is unspecified)
trying URL '[URL]
Content type 'unknown' length 11387 bytes (11 KB)
==================================================
downloaded 11 KB
* installing *source* package ‘multicool’ ...
** package ‘multicool’ successfully unpacked and MD5 sums checked
** libs
icpc -I/opt/bwhpc/common/math/R/3.3.1-mkl-11.2.3-intel-15.0_O2_pragma_noopt/lib64/R/include -DNDEBUG -I/usr/local/include -I"/pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include" -fpic -O2 -std=c++11 -fp-model strict -openmp -xHost -c RcppExports.cpp -o RcppExports.o
icpc [same flags] -c compositions.cpp -o compositions.o
icpc [same flags] -c multicool.cpp -o multicool.o
In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64),
                 from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27),
                 from multicool.cpp(8):
/pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(69): error #308: member "std::complex<double>::_M_value" (declared at line 1337 of "/usr/include/c++/4.8.5/complex") is inaccessible
  _M_value = __z._M_value;
  ^
[... the same error #308 ("_M_value ... is inaccessible", declared at lines 1187 and 1337 of /usr/include/c++/4.8.5/complex) repeats for every remaining std::complex<float> and std::complex<double> operator ...]
compilation aborted for multicool.cpp (code 2)
make: *** [multicool.o] Error 2
ERROR: compilation failed for package ‘multicool’
* removing ‘/pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/multicool’
The downloaded source packages are in ‘/scratch/RtmpQPqfE3/downloaded_packages’
Warning message:
In install.packages("multicool") :
  installation of package ‘multicool’ had non-zero exit status

Is there any way to install the library?
R: compilation of multicool fails I want to install the rugarch library on a system running Red Hat 7.3 and R version 3.3.1. Unfortunately, I do not have admin rights on the machine. The installation of rugarch fails due to a compilation errror of multicool . Running install.packages('multicool') terminates with the error mesage: compilation aborted for multicool.cpp (code 2) make: *** [multicool.o] Error 2 ERROR: compilation failed for package ‘multicool’ And here is the full output: > install.packages('multicool') Installing package into ‘/pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3’ (as ‘lib’ is unspecified) trying URL '[URL] Content type 'unknown' length 11387 bytes (11 KB) ================================================== downloaded 11 KB * installing *source* package ‘multicool’ ... ** package ‘multicool’ successfully unpacked and MD5 sums checked ** libs icpc -I/opt/bwhpc/common/math/R/3.3.1-mkl-11.2.3-intel-15.0_O2_pragma_noopt/lib64/R/include -DNDEBUG -I/usr/local/include -I"/pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include" -fpic -O2 -std=c++11 -fp-model strict -openmp -xHost -c RcppExports.cpp -o RcppExports.o icpc -I/opt/bwhpc/common/math/R/3.3.1-mkl-11.2.3-intel-15.0_O2_pragma_noopt/lib64/R/include -DNDEBUG -I/usr/local/include -I"/pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include" -fpic -O2 -std=c++11 -fp-model strict -openmp -xHost -c compositions.cpp -o compositions.o icpc -I/opt/bwhpc/common/math/R/3.3.1-mkl-11.2.3-intel-15.0_O2_pragma_noopt/lib64/R/include -DNDEBUG -I/usr/local/include -I"/pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include" -fpic -O2 -std=c++11 -fp-model strict -openmp -xHost -c multicool.cpp -o multicool.o In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from 
/pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(69): error #308: member "std::complex<double>::_M_value" (declared at line 1337 of "/usr/include/c++/4.8.5/complex") is inaccessible _M_value = __z._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(77): error #308: member "std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible _M_value = __z._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(115): error #308: member "std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible return __x._M_value + __y._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(115): error #308: member "std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible return __x._M_value + 
__y._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(120): error #308: member "std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible return __x._M_value - __y._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(120): error #308: member "std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible return __x._M_value - __y._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(125): error #308: member "std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible return __x._M_value * __y._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): 
/pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(125): error #308: member "std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible return __x._M_value * __y._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(130): error #308: member "std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible return __x._M_value / __y._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(130): error #308: member "std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible return __x._M_value / __y._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(134): error #308: member "std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible return -__x._M_value; ^ In file included from 
/pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(141): error #308: member "std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible return __x._M_value == __y._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(141): error #308: member "std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible return __x._M_value == __y._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(146): error #308: member "std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible return __x._M_value != __y._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(146): error #308: member 
"std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible return __x._M_value != __y._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(150): error #308: member "std::complex<float>::_M_value" (declared at line 1187 of "/usr/include/c++/4.8.5/complex") is inaccessible return ~__z._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(187): error #308: member "std::complex<double>::_M_value" (declared at line 1337 of "/usr/include/c++/4.8.5/complex") is inaccessible return __x._M_value + __y._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from multicool.cpp(8): /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex(187): error #308: member "std::complex<double>::_M_value" (declared at line 1337 of "/usr/include/c++/4.8.5/complex") is inaccessible return __x._M_value + __y._M_value; ^ In file included from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/RcppCommon.h(64), from /pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/Rcpp/include/Rcpp.h(27), from 
multicool.cpp(8): [the same diagnostic — error #308: member "std::complex<double>::_M_value" (declared at line 1337 of "/usr/include/c++/4.8.5/complex") is inaccessible — then repeats for every std::complex<double> operator (+, -, *, /, unary -, ==, !=, ~) in /pfs/data1/software_uc1/bwhpc/common/compiler/intel/compxe.2015.3.187/composer_xe_2015.3.187/compiler/include/complex, each time included via Rcpp.h from multicool.cpp(8)] compilation aborted for multicool.cpp (code 2) make: *** [multicool.o] Error 2 ERROR: compilation failed for package ‘multicool’ * removing ‘/pfs/data1/home/kn/kn_kn/kn_pop260093/R/x86_64-pc-linux-gnu-library/3.3/multicool’ The downloaded source packages are in ‘/scratch/RtmpQPqfE3/downloaded_packages’ Warning message: In install.packages("multicool") : installation of package ‘multicool’ had non-zero exit status Is there any way to install the library?
r, compiler-errors, compilation, redhat
3
751
2
https://stackoverflow.com/questions/41975192/r-compilation-of-multicool-fails
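For the multicool build failure above, the clash is between the Intel 2015 compiler's bundled `<complex>` header and GCC 4.8.5's libstdc++. A workaround often suggested for this class of problem is to have R compile packages with GCC instead of icpc via a user Makevars file; a minimal sketch (compiler names and flags are assumptions — match them to whatever the cluster's module system provides):

```
# ~/.R/Makevars — override the compilers R uses for package builds
CC  = gcc
CXX = g++
CXXFLAGS = -O2
```

After creating the file, rerun `install.packages("multicool")` in a fresh R session.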
37,652,604
install mysql without having internet nor cd/dvd iso on redhat
I want to install mysql server on a remote server but I haven't internet access and I haven't cd/dvd iso. Is it possible to download all mysql repository locally? Actually if I execute yum install mysql* , I have this error : [URL] [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'repo.mysql.com'" Trying other mirror. Error: Cannot retrieve repository metadata (repomd.xml) for repository: mysql-connectors-community. Please verify its path and try again Does anyone have a solution? Thanks.
install mysql without having internet nor cd/dvd iso on redhat I want to install mysql server on a remote server but I haven't internet access and I haven't cd/dvd iso. Is it possible to download all mysql repository locally? Actually if I execute yum install mysql* , I have this error : [URL] [Errno 14] PYCURL ERROR 6 - "Couldn't resolve host 'repo.mysql.com'" Trying other mirror. Error: Cannot retrieve repository metadata (repomd.xml) for repository: mysql-connectors-community. Please verify its path and try again Does anyone have a solution? Thanks.
mysql, redhat
3
12,404
2
https://stackoverflow.com/questions/37652604/install-mysql-without-having-internet-nor-cd-dvd-iso-on-redhat
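For offline installs like the one above, the usual pattern is to download the full RPM set on an internet-connected machine (e.g. `yumdownloader --resolve mysql-community-server`), copy the directory to the target, run `createrepo` on it, and point yum at it with a file:// repository. A sketch of the repo definition (the /opt/mysql-rpms path is an assumption):

```
# /etc/yum.repos.d/local-mysql.repo
[local-mysql]
name=Local MySQL RPMs
baseurl=file:///opt/mysql-rpms/
enabled=1
gpgcheck=0
```

Also disable or remove the unreachable mysql-connectors-community repo file so yum stops trying to resolve repo.mysql.com.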
34,660,174
Relocated path in a postinstall script
I'm working on an RPM package that deploys files to /opt and /etc. In most of the cases it works perfectly, excepted that for a given environment, writing to /etc is not allowed .... So I used Relocations in order to deploy the /etc files in some other location : Relocations : /opt /etc By specifying --relocate option I can deploy the /etc files into another location : rpm -ivh --relocate /etc=/my/path/to/etc mypackage.rpm Now the issue is that in the postinstall script, there are some hard coded references to /etc that don't get replaced when the package is deployed : echo hostname --fqdn > /etc/myapp/host.conf I hope that there is a way (macro, keyword, ... ) to use instead of hard coded paths in order to perform the substitutions during rpm execution. If you have any information on this I'd really appreciate some help. Thanks per advance PS : Please note that this is NOT a duplicate of the previously asked (and answered) questions related to the root path re-locations as we're dealing with several relocation paths and the fact that we need to handle each of them separately during rpm scriptlets
Relocated path in a postinstall script I'm working on an RPM package that deploys files to /opt and /etc. In most of the cases it works perfectly, excepted that for a given environment, writing to /etc is not allowed .... So I used Relocations in order to deploy the /etc files in some other location : Relocations : /opt /etc By specifying --relocate option I can deploy the /etc files into another location : rpm -ivh --relocate /etc=/my/path/to/etc mypackage.rpm Now the issue is that in the postinstall script, there are some hard coded references to /etc that don't get replaced when the package is deployed : echo hostname --fqdn > /etc/myapp/host.conf I hope that there is a way (macro, keyword, ... ) to use instead of hard coded paths in order to perform the substitutions during rpm execution. If you have any information on this I'd really appreciate some help. Thanks per advance PS : Please note that this is NOT a duplicate of the previously asked (and answered) questions related to the root path re-locations as we're dealing with several relocation paths and the fact that we need to handle each of them separately during rpm scriptlets
centos, redhat, rpm, rpm-spec
3
920
1
https://stackoverflow.com/questions/34660174/relocated-path-in-a-postinstall-script
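For the relocation question above: rpm exports one environment variable per `Prefix:` line into the scriptlets — `$RPM_INSTALL_PREFIX0`, `$RPM_INSTALL_PREFIX1`, … in declaration order — holding the possibly-relocated paths. A hedged %post sketch using it (directory layout taken from the question):

```
Prefix: /opt
Prefix: /etc

%post
# PREFIX1 corresponds to the second Prefix: line (/etc), relocated or not
ETC_ROOT="${RPM_INSTALL_PREFIX1:-/etc}"
mkdir -p "$ETC_ROOT/myapp"
hostname --fqdn > "$ETC_ROOT/myapp/host.conf"
```

The `:-/etc` fallback keeps the scriptlet working when the package is installed without `--relocate`.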
34,349,675
"no such instruction error" when assembling an array declaration:
I have the following piece of x86 assembly code: 1 2 .text 3 4 .data 5 6 # define an array of 3 dwords 7 array_word DW 1, 2, 3 8 9 10 .globl main 11 12main: 13 # nothing interesting .. 14 But when I compile this, I keep getting the following error: $ gcc my_asm.s my_asm.s: Assembler messages: my_asm.s:7: Error: no such instruction: `array_word DW 1,2,3' This is the gcc I use: $ gcc --version gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16) Copyright (C) 2010 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
"no such instruction error" when assembling an array declaration: I have the following piece of x86 assembly code: 1 2 .text 3 4 .data 5 6 # define an array of 3 dwords 7 array_word DW 1, 2, 3 8 9 10 .globl main 11 12main: 13 # nothing interesting .. 14 But when I compile this, I keep getting the following error: $ gcc my_asm.s my_asm.s: Assembler messages: my_asm.s:7: Error: no such instruction: `array_word DW 1,2,3' This is the gcc I use: $ gcc --version gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16) Copyright (C) 2010 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
arrays, assembly, x86, redhat
3
3,970
1
https://stackoverflow.com/questions/34349675/no-such-instruction-error-when-assembling-an-array-declaration
31,728,941
Changing the location of git installation on linux
I apologize if this seems basic but I'm new to linux and not really sure how to proceed. My current git version is 1.7.1 and is located in /usr/bin/git but a newer version of git (1.8) is now available in /usr/src/git/bin/git. How do I make git use this version by default as opposed to the 1.7.1 version?
Changing the location of git installation on linux I apologize if this seems basic but I'm new to linux and not really sure how to proceed. My current git version is 1.7.1 and is located in /usr/bin/git but a newer version of git (1.8) is now available in /usr/src/git/bin/git. How do I make git use this version by default as opposed to the 1.7.1 version?
linux, git, redhat
3
3,835
1
https://stackoverflow.com/questions/31728941/changing-the-location-of-git-installation-on-linux
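The shell resolves `git` to the first match on PATH, so prepending the new version's directory makes it the default without touching the 1.7.1 install. Demonstrated below with a stand-in script in /tmp; for the real fix, add the export line (pointing at /usr/src/git/bin) to ~/.bashrc:

```shell
# Stand-in for the newer git binary (illustration only)
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho "git version 1.8 (demo)"\n' > /tmp/demo-bin/git
chmod +x /tmp/demo-bin/git

# Put the new directory first in PATH
export PATH=/tmp/demo-bin:$PATH

command -v git      # now resolves to /tmp/demo-bin/git
git --version
```

If the old binary was already run in the current session, `hash -r` clears bash's cached lookup.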
23,638,157
Add user id to bash history
Currently when I do history I get: 996 05/13/14 10:37 ls-l 997 05/13/14 10:37 ls -l 998 05/13/14 10:37 chmod 700 hosts.txt 999 05/13/14 10:37 tail -5 .bash_history 1000 05/13/14 10:37 tail -10 .bash_history 1001 05/13/14 10:38 hisotry Is it possible to change the mechanism to that when it becomes: 996 username1 05/13/14 10:37 ls-l 997 username1 05/13/14 10:37 ls -l 998 username2 05/13/14 10:37 chmod 700 hosts.txt 999 username3 05/13/14 10:37 tail -5 .bash_history 1000 username1 05/13/14 10:37 tail -10 .bash_history 1001 username4 05/13/14 10:38 hisotry I tried editing the PROMPT_COMMAND but was unable to get the result I want. The scenario that I am dealing with is several users sudo to another user and each user runs several commands. What I need is track which user ran which command.
Add user id to bash history Currently when I do history I get: 996 05/13/14 10:37 ls-l 997 05/13/14 10:37 ls -l 998 05/13/14 10:37 chmod 700 hosts.txt 999 05/13/14 10:37 tail -5 .bash_history 1000 05/13/14 10:37 tail -10 .bash_history 1001 05/13/14 10:38 hisotry Is it possible to change the mechanism to that when it becomes: 996 username1 05/13/14 10:37 ls-l 997 username1 05/13/14 10:37 ls -l 998 username2 05/13/14 10:37 chmod 700 hosts.txt 999 username3 05/13/14 10:37 tail -5 .bash_history 1000 username1 05/13/14 10:37 tail -10 .bash_history 1001 username4 05/13/14 10:38 hisotry I tried editing the PROMPT_COMMAND but was unable to get the result I want. The scenario that I am dealing with is several users sudo to another user and each user runs several commands. What I need is track which user ran which command.
linux, bash, history, redhat
3
6,317
4
https://stackoverflow.com/questions/23638157/add-user-id-to-bash-history
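One PROMPT_COMMAND-based sketch for the question above: append "<user> <timestamp> <command>" to a shared audit file before each prompt. `$SUDO_USER` records who invoked sudo; the log path and the `sed` cleanup of `history 1` output are assumptions to adjust locally (put the snippet in /etc/bashrc so every user picks it up):

```shell
AUDIT_LOG=${AUDIT_LOG:-/tmp/cmd_audit.log}

# Append the invoking user, a timestamp, and the last history entry
log_last_cmd() {
  printf '%s %s %s\n' "${SUDO_USER:-$(id -un)}" \
    "$(date '+%m/%d/%y %H:%M')" \
    "$(history 1 | sed 's/^ *[0-9]* *//')" >> "$AUDIT_LOG"
}
PROMPT_COMMAND=log_last_cmd
```

Note that users can edit their own history files, so for reliable attribution across `sudo su` the kernel audit subsystem (`auditd` with `pam_loginuid`) is the usual production answer.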
19,610,714
Installing 3rd party packages in kickstart on redhat
I have been trying to work out how to add my own packages as part of a kickstart install (specifically mondo packages) but using the %packages directive as opposed to rpm commands in the post scripts. I tried adding them to the packages file with my %include statement in the kickstart file, and copied the RPM's to the RH linux/Packages directory, however these packages don't get installed. I read something about comps.xml but dont have that file in the RHEL distribution, or know what the procedure is. Essentially I have a package list which I include like this: # cat packages.txt openssh-clients openssh-server afio-2.5-1.rhel6.x86_64.rpm buffer-1.19-4.rhel6.x86_64.rpm mindi-2.1.7-1.rhel6.x86_64.rpm mindi-busybox-1.18.5-3.rhel6.x86_64.rpm mondo-3.0.4-1.rhel6.x86_64.rpm All the rpms from afio down are custom ones not part of the RH installation. Could someone tell me how this can be done? thanks
Installing 3rd party packages in kickstart on redhat I have been trying to work out how to add my own packages as part of a kickstart install (specifically mondo packages) but using the %packages directive as opposed to rpm commands in the post scripts. I tried adding them to the packages file with my %include statement in the kickstart file, and copied the RPM's to the RH linux/Packages directory, however these packages don't get installed. I read something about comps.xml but dont have that file in the RHEL distribution, or know what the procedure is. Essentially I have a package list which I include like this: # cat packages.txt openssh-clients openssh-server afio-2.5-1.rhel6.x86_64.rpm buffer-1.19-4.rhel6.x86_64.rpm mindi-2.1.7-1.rhel6.x86_64.rpm mindi-busybox-1.18.5-3.rhel6.x86_64.rpm mondo-3.0.4-1.rhel6.x86_64.rpm All the rpms from afio down are custom ones not part of the RH installation. Could someone tell me how this can be done? thanks
redhat
3
9,027
1
https://stackoverflow.com/questions/19610714/installing-3rd-party-packages-in-kickstart-on-redhat
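The likely reason the list above is ignored: `%packages` entries are package *names* resolved from a yum repository, not `.rpm` file paths. The usual fix is to serve the custom RPMs from their own small repo (created with `createrepo`) and declare it in the kickstart. A sketch (URL and directory are assumptions):

```
# On the install server, once:
#   createrepo /var/www/html/custom/      (directory holding the mondo RPMs)

# In the kickstart file:
repo --name=custom --baseurl=http://installserver/custom/

%packages
openssh-clients
openssh-server
afio
buffer
mindi
mindi-busybox
mondo
%end
```

With the repo declared, plain names are enough — anaconda resolves versions and dependencies itself, so editing comps.xml is not required.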
16,701,938
How to import/export repository from SVN to new SVN
I have an old Subversion on one server and another new one on another server. I would like to export the head revision from the old repository and import it into the new one. I have tried the below which seems to export, but I can't get it to import into the new one. svn export --depth immediates file:///repositories/repo1/ /home/me/repo-export This is what I am trying for import: svn import /home/me/repo-export/ /svnroot/ How can this be done via the Linux ( Red Hat Linux 4) command line?
How to import/export repository from SVN to new SVN I have an old Subversion on one server and another new one on another server. I would like to export the head revision from the old repository and import it into the new one. I have tried the below which seems to export, but I can't get it to import into the new one. svn export --depth immediates file:///repositories/repo1/ /home/me/repo-export This is what I am trying for import: svn import /home/me/repo-export/ /svnroot/ How can this be done via the Linux ( Red Hat Linux 4) command line?
linux, svn, redhat, repository
3
11,682
1
https://stackoverflow.com/questions/16701938/how-to-import-export-repository-from-svn-to-new-svn
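`svn export`/`svn import` move only a snapshot of the files with no history; migrating a repository between servers is normally done with `svnadmin dump` and `svnadmin load`. A sketch reusing the question's paths (run each command on the respective server):

```shell
# On the old server:
svnadmin dump /repositories/repo1 > /home/me/repo1.dump

# ...copy repo1.dump to the new server (scp, etc.), then there:
svnadmin create /svnroot/repo1
svnadmin load /svnroot/repo1 < /home/me/repo1.dump
```

This preserves all revisions; to carry over only HEAD, the export/import pair is fine, but `svn import` must target a repository URL (e.g. `file:///svnroot/repo1`), not a filesystem path.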
16,318,584
Where can I get xmkmf for RedHat Linux?
I'm trying to build GLUT, but I fail on: xmkmf: Command not found Where can I find this?
Where can I get xmkmf for RedHat Linux? I'm trying to build GLUT, but I fail on: xmkmf: Command not found Where can I find this?
linux, redhat
3
6,058
2
https://stackoverflow.com/questions/16318584/where-can-i-get-xmkmf-for-redhat-linux
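On Red Hat systems `xmkmf` is shipped by the `imake` package (it is a small wrapper that runs imake against the X11 config templates), so installing that package should provide the command:

```shell
yum install imake
```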
13,218,013
Redhat openshift - Cron Runtime - Is there a default time for how long cron executes
Cron on Redhat openshift is cancelled by SIGTERM after some minutes. Is there a default timeout on how long cron tasks can execute? If yes, how to get long running tasks working?
Redhat openshift - Cron Runtime - Is there a default time for how long cron executes Cron on Redhat openshift is cancelled by SIGTERM after some minutes. Is there a default timeout on how long cron tasks can execute? If yes, how to get long running tasks working?
cron, redhat, openshift
3
877
1
https://stackoverflow.com/questions/13218013/redhat-openshift-cron-runtime-is-there-a-default-time-for-how-long-cron-exec
12,813,203
how to solve 'java.lang.OutOfMemoryError: GC overhead limit exceeded'
I read this stack overflow page about solving this problem and tried adding the command line option -XX:-UseGCOverheadLimit and also "-Xmx" arguments. However, my program still threw the out of memory error. The program saves a large number (>40,000 keys) of words into a MultiKeyMap and is running on a server with plenty of memory. Any suggestions on how I can avoid the error?
how to solve 'java.lang.OutOfMemoryError: GC overhead limit exceeded' I read this stack overflow page about solving this problem and tried adding the command line option -XX:-UseGCOverheadLimit and also "-Xmx" arguments. However, my program still threw the out of memory error. The program saves a large number (>40,000 keys) of words into a MultiKeyMap and is running on a server with plenty of memory. Any suggestions on how I can avoid the error?
java, memory, redhat
3
13,607
3
https://stackoverflow.com/questions/12813203/how-to-solve-java-lang-outofmemoryerror-gc-overhead-limit-exceeded
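Two things worth checking for the question above: JVM flags only take effect if they appear *before* the main class or `-jar` (anything after is passed to the program as arguments), and "GC overhead limit exceeded" means the heap is effectively full — the JVM spent >98% of its time collecting while recovering <2% of the heap — so `-XX:-UseGCOverheadLimit` merely trades this error for a later plain OutOfMemoryError. A hedged invocation (heap sizes and class name are placeholders):

```shell
java -Xms1g -Xmx6g -XX:-UseGCOverheadLimit -cp . MyApp
```

If a generous `-Xmx` still overflows, the retained data itself needs shrinking — e.g. interning repeated key strings rather than holding duplicates in the MultiKeyMap.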
12,280,872
installing php-devel on RHEL6 (PHP 5.3.3)
I'm trying in vain to get the php oci_* extensions installed on our server, but i've hit a brick wall. So far i've done this: Installed oracle basic & devel libraries (v10.2) Installed php-pear package Now I'm trying to install oci8 using "pecl install oci8" but I get an error message about "phpize" command not being found. My googling tells me that that is caused by "php-devel" not being installed, so i tried various different yum searches, e.g. "yum search php-devel", "yum search php5-devel", "yum search php-dev", etc... none of which could find anything. I eventually found a repository hosted by "utterramblings" which had php-devel. So, now when I do a yum search using that repository, it can find "php-devel": php-devel.i386 : Files needed for building PHP extensions But when I try to install it I get this: Error: Package: php-devel-5.2.17-jason.2.i386 (utterramblings) Requires: php = 5.2.17-jason.2 Installed: php-5.3.3-14.el6_3.i686 (@rhel-i386-server-6) php = 5.3.3-14.el6_3 Available: php-5.2.13-jason.1.i386 (utterramblings) php = 5.2.13-jason.1 Available: php-5.2.14-jason.1.i386 (utterramblings) php = 5.2.14-jason.1 Available: php-5.2.16-jason.1.i386 (utterramblings) php = 5.2.16-jason.1 Available: php-5.2.17-jason.2.i386 (utterramblings) php = 5.2.17-jason.2 Available: php-5.3.2-6.el6.i686 (rhel-i386-server-6) php = 5.3.2-6.el6 Available: php-5.3.2-6.el6_0.1.i686 (rhel-i386-server-6) php = 5.3.2-6.el6_0.1 Available: php-5.3.3-3.el6.i686 (rhel-i386-server-6) php = 5.3.3-3.el6 Available: php-5.3.3-3.el6_1.3.i686 (rhel-i386-server-6) php = 5.3.3-3.el6_1.3 Available: php-5.3.3-3.el6_2.5.i686 (rhel-i386-server-6) php = 5.3.3-3.el6_2.5 Available: php-5.3.3-3.el6_2.6.i686 (rhel-i386-server-6) php = 5.3.3-3.el6_2.6 Available: php-5.3.3-3.el6_2.8.i686 (rhel-i386-server-6) php = 5.3.3-3.el6_2.8 And to be honest, i'm not sure how to resolve that, presumably it has something to do with the version of php we have installed, but i'm not sure what I need to do to fix it. 
These are our details: Red Hat Enterprise Linux Server release 6.1 (Santiago) [32bit] PHP 5.3.3 Could anyone please advise me as to either: a) what I need to do to resolve that issue and get php-devel installed from that repo OR b) point me in the direction of another repo which will allow me to easily install php-devel for our server Thank you.
installing php-devel on RHEL6 (PHP 5.3.3) I'm trying in vain to get the php oci_* extensions installed on our server, but i've hit a brick wall. So far i've done this: Installed oracle basic & devel libraries (v10.2) Installed php-pear package Now I'm trying to install oci8 using "pecl install oci8" but I get an error message about "phpize" command not being found. My googling tells me that that is caused by "php-devel" not being installed, so i tried various different yum searches, e.g. "yum search php-devel", "yum search php5-devel", "yum search php-dev", etc... none of which could find anything. I eventually found a repository hosted by "utterramblings" which had php-devel. So, now when I do a yum search using that repository, it can find "php-devel": php-devel.i386 : Files needed for building PHP extensions But when I try to install it I get this: Error: Package: php-devel-5.2.17-jason.2.i386 (utterramblings) Requires: php = 5.2.17-jason.2 Installed: php-5.3.3-14.el6_3.i686 (@rhel-i386-server-6) php = 5.3.3-14.el6_3 Available: php-5.2.13-jason.1.i386 (utterramblings) php = 5.2.13-jason.1 Available: php-5.2.14-jason.1.i386 (utterramblings) php = 5.2.14-jason.1 Available: php-5.2.16-jason.1.i386 (utterramblings) php = 5.2.16-jason.1 Available: php-5.2.17-jason.2.i386 (utterramblings) php = 5.2.17-jason.2 Available: php-5.3.2-6.el6.i686 (rhel-i386-server-6) php = 5.3.2-6.el6 Available: php-5.3.2-6.el6_0.1.i686 (rhel-i386-server-6) php = 5.3.2-6.el6_0.1 Available: php-5.3.3-3.el6.i686 (rhel-i386-server-6) php = 5.3.3-3.el6 Available: php-5.3.3-3.el6_1.3.i686 (rhel-i386-server-6) php = 5.3.3-3.el6_1.3 Available: php-5.3.3-3.el6_2.5.i686 (rhel-i386-server-6) php = 5.3.3-3.el6_2.5 Available: php-5.3.3-3.el6_2.6.i686 (rhel-i386-server-6) php = 5.3.3-3.el6_2.6 Available: php-5.3.3-3.el6_2.8.i686 (rhel-i386-server-6) php = 5.3.3-3.el6_2.8 And to be honest, i'm not sure how to resolve that, presumably it has something to do with the version of php we have installed, but 
i'm not sure what I need to do to fix it. These are our details: Red Hat Enterprise Linux Server release 6.1 (Santiago) [32bit] PHP 5.3.3 Could anyone please advise me as to either: a) what I need to do to resolve that issue and get php-devel installed from that repo OR b) point me in the direction of another repo which will allow me to easily install php-devel for our server Thank you.
php, redhat, pecl, oracle-call-interface
3
17,312
1
https://stackoverflow.com/questions/12280872/installing-php-devel-on-rhel6-php-5-3-3
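Mixing the third-party PHP 5.2 repo with Red Hat's 5.3.3 packages is exactly what produces the dependency clash above. On RHEL 6 the matching php-devel lives in Red Hat's own Optional channel, so subscribing the system to it should be the cleaner route (channel name quoted from memory of RHN Classic — verify with `rhn-channel -L`):

```shell
rhn-channel --add --channel=rhel-i386-server-optional-6
yum install php-devel
```

With php-devel matching the installed php 5.3.3, `pecl install oci8` can then find phpize.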
12,131,882
Finding Perl modules already packaged for Debian and Redhat
I'm investigating making a Perl application that uses many modules into either a Debian and/or Redhat package. Currently, I believe the 'cleanest' way to do this is to cite, where possible, the modules that are packaged already for the given distribution. The alternative would be to use CPAN and probably have some duplications, problems with @INC etc. However, I can find or interrogate a list of Debian packages here: [URL] but I can't currently find an equivalent for Redhat/Fedora. Also I don't really know whether cpan2deb is authoritative and up to date. If there's another clean way to do this, I'd welcome any other ideas too.
Finding Perl modules already packaged for Debian and Redhat I'm investigating making a Perl application that uses many modules into either a Debian and/or Redhat package. Currently, I believe the 'cleanest' way to do this is to cite, where possible, the modules that are packaged already for the given distribution. The alternative would be to use CPAN and probably have some duplications, problems with @INC etc. However, I can find or interrogate a list of Debian packages here: [URL] but I can't currently find an equivalent for Redhat/Fedora. Also I don't really know whether cpan2deb is authoritative and up to date. If there's another clean way to do this, I'd welcome any other ideas too.
perl, package, debian, redhat
3
458
3
https://stackoverflow.com/questions/12131882/finding-perl-modules-already-packaged-for-debian-and-redhat
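On the Red Hat/Fedora side there is no single web list quite like the Debian one, but yum understands Perl virtual provides, so packaged modules can be located and installed by module name; the Debian-side equivalent lookup is `dh-make-perl locate`. A sketch (Date::Manip is just an example module):

```shell
# Red Hat / Fedora: query and install by Perl module name
repoquery --whatprovides 'perl(Date::Manip)'
yum install 'perl(Date::Manip)'

# Debian: map a module to its package
dh-make-perl locate Date::Manip
```

Modules with no distro package can then be packaged individually (cpan2rpm/cpanspec or dh-make-perl) rather than pulled ad hoc from CPAN.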
10,395,866
error while loading shared libraries: libcudart.so.4: cannot open shared object file: No such file or directory
I am trying to execute MPI and CUDA code on a cluster. The code works fine on single machine but when I try to execute it on cluster I get error: error while loading shared libraries: libcudart.so.4: cannot open shared object file: No such file or directory I checked my PATH and LD_PATH and it looks ok. I have a .bashrc file which contains following entries - export PATH=$PATH:/usr/local/lib/:/usr/local/lib/openmpi:/usr/local/cuda/bin export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib:/usr/local/ lib/openmpi/:/usr/local/cuda/lib All the machines haves same installation of CUDA and OpenMPI. I also have /usr/local/cuda/lib in /etc/ld.so.conf Can anyone help me with this. This problem is really annoying. Thanks.
error while loading shared libraries: libcudart.so.4: cannot open shared object file: No such file or directory I am trying to execute MPI and CUDA code on a cluster. The code works fine on single machine but when I try to execute it on cluster I get error: error while loading shared libraries: libcudart.so.4: cannot open shared object file: No such file or directory I checked my PATH and LD_PATH and it looks ok. I have a .bashrc file which contains following entries - export PATH=$PATH:/usr/local/lib/:/usr/local/lib/openmpi:/usr/local/cuda/bin export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib:/usr/local/ lib/openmpi/:/usr/local/cuda/lib All the machines haves same installation of CUDA and OpenMPI. I also have /usr/local/cuda/lib in /etc/ld.so.conf Can anyone help me with this. This problem is really annoying. Thanks.
c, linux, cuda, redhat
3
14,460
1
https://stackoverflow.com/questions/10395866/error-while-loading-shared-libraries-libcudart-so-4-cannot-open-shared-object
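A likely cause of the cluster-only failure above is that ~/.bashrc is never sourced by the non-interactive shells MPI starts on the remote nodes, so LD_LIBRARY_PATH is empty there. Two common fixes — export the variable through the launcher (`-x` is Open MPI's env-forwarding flag) or register the library path system-wide on each node:

```shell
# Forward the caller's LD_LIBRARY_PATH to every rank:
mpirun -np 8 -x LD_LIBRARY_PATH ./my_mpi_cuda_app

# ...or, as root on each node, make the CUDA libs known to the loader:
echo /usr/local/cuda/lib > /etc/ld.so.conf.d/cuda.conf
ldconfig
```

Since /usr/local/cuda/lib is reportedly already in /etc/ld.so.conf, simply running `ldconfig` on every node may be enough — the cache is only rebuilt when ldconfig runs.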
9,193,697
AVX-optimized code not running on linux redhat 5.6
I have some simple test code which I am trying to generate AVX optimized code for using the icc v12.1 on linux Redhat 5.6. The code looks like this: int main() { double sum = 0.0; for (unsigned int i = 0; i < 1024; i++) { sum += static_cast<double>(i); } std::cout << "Sum: "<< sum << std::endl; return 0; } And I compile it with (and the vector report says that the loop was vectorized): icc -xavx -vec-report1 main.cpp When I run the code I get the following error: Fatal Error: This program was not built to run in your system. Please verify that both the operating system and the processor support Intel(R) AVX. I am certain that the processor is AVX-capable, but does anyone else have problem with AVX on Redhat 5.6?
AVX-optimized code not running on linux redhat 5.6 I have some simple test code which I am trying to generate AVX optimized code for using the icc v12.1 on linux Redhat 5.6. The code looks like this: int main() { double sum = 0.0; for (unsigned int i = 0; i < 1024; i++) { sum += static_cast<double>(i); } std::cout << "Sum: "<< sum << std::endl; return 0; } And I compile it with (and the vector report says that the loop was vectorized): icc -xavx -vec-report1 main.cpp When I run the code I get the following error: Fatal Error: This program was not built to run in your system. Please verify that both the operating system and the processor support Intel(R) AVX. I am certain that the processor is AVX-capable, but does anyone else have problem with AVX on Redhat 5.6?
linux, redhat, icc, avx
3
2,486
1
https://stackoverflow.com/questions/9193697/avx-optimized-code-not-running-on-linux-redhat-5-6
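AVX needs support from both the CPU and the OS: the kernel must save and restore the ymm register state via xsave, which Linux gained around 2.6.30, while RHEL 5.6 ships a 2.6.18-based kernel — consistent with the runtime check failing above. Quick checks on the target machine:

```shell
grep -m1 -o avx /proc/cpuinfo    # CPU side: prints "avx" if the CPU has it
uname -r                         # OS side: needs roughly >= 2.6.30 (RHEL 6+)
```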
8,143,829
How to Exclude /usr/include Path from Linux Application?
I am running into a problem I have not been able to avoid. Redhat 6 (or most linux packages) comes with a default QT package installed with headers/etc in the /usr/lib and /usr/include folders. Now, I am wanting to link against a newer version of QT without removing the older version. Unfortunately, since the headers are in the /include/ folder, gcc automatically finds them, and then uses the wrong include files (instead of those which I have elsewhere). I cannot seem to stop the compiler from automatically doing this. I have gotten around it previously by simply manually removing the old libraries/headers but this is a terrible solution long term. I do not think this problem is specific to QT either, it just happens to be my current instance of it. Any suggestions? Many thanks :)
How to Exclude /usr/include Path from Linux Application? I am running into a problem I have not been able to avoid. Redhat 6 (or most linux packages) comes with a default QT package installed with headers/etc in the /usr/lib and /usr/include folders. Now, I am wanting to link against a newer version of QT without removing the older version. Unfortunately, since the headers are in the /include/ folder, gcc automatically finds them, and then uses the wrong include files (instead of those which I have elsewhere). I cannot seem to stop the compiler from automatically doing this. I have gotten around it previously by simply manually removing the old libraries/headers but this is a terrible solution long term. I do not think this problem is specific to QT either, it just happens to be my current instance of it. Any suggestions? Many thanks :)
c++, linux, qt, redhat
3
5,907
2
https://stackoverflow.com/questions/8143829/how-to-exclude-usr-include-path-from-linux-application
68,543,425
start pod with root privilege on OpenShift
I have an image that requires root privilege to start. Now I'm trying to deploy it on OpenShift. this is the deployment yaml I used to deploy it apiVersion: apps/v1 kind: Deployment metadata: name: xyz annotations: k8s.v1.cni.cncf.io/networks: macvlan-conf spec: selector: matchLabels: name: xyz template: metadata: labels: name: xyz spec: containers: - name: xyz image: 172.30.1.1:5000/myproject/xyz@sha256:bf3d219941ec0de7f52f6babbca23e03cc2611d327552b08f530ead9ec627ec2 imagePullPolicy: Always securityContext: capabilities: add: - ALL privileged: false allowPrivilegeEscalation: false runAsUser: 0 serviceAccount: runasanyuid serviceAccountName: runasanyuid hostNetwork: true resources: limits: memory: "12000Mi" requests: memory: "6000Mi" ports: - containerPort: 2102 command: - /usr/sbin/sshd -D please note that I already created a SCC called 'scc-admin' to run the pods in the project I'm working on with any UID, as I know that OpenShift doesn't allow pods to start with root privilege by default. 
kind: SecurityContextConstraints apiVersion: v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - developer groups: - developer that's what I found on the internet as a solution for my issue, but I guess it didn't work as well :( [root@centos72_base ~]# oc get scc NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid true [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret] hostaccess false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret] hostmount-anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret] hostnetwork false [] MustRunAs MustRunAsRange MustRunAs MustRunAs <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] nonroot false [] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] privileged true [*] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*] restricted false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk portworxVolume projected quobyte rbd scaleIO secret storageOS vsphere] [root@centos72_base ~]# please also note that this image works fine with docker using the below command docker run -d --network host --privileged --cap-add=ALL --security-opt seccomp=unconfined --name 
xyz 172.30.1.1:5000/myproject/xyz /usr/sbin/sshd -D [root@centos72_base ~]# docker ps | grep xyz 793e339ff732 172.30.1.1:5000/myproject/xyz "/usr/sbin/sshd -D" About a minute ago Up About a minute xyz and on OpenShift i get these errors with the deployment file I provided above Error creating: pods "xyz-7966f58588-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1000140000, 1000149999] capabilities.add: Invalid value: "ALL": capability may not be added] which means that i have to remove capabilities: add: - ALL and runAsUser: 0 to start the pod and when I remove them from the yaml file, I get a crash loopback error from the pod so can anyone please help me with that
start pod with root privilege on OpenShift I have an image that requires root privilege to start. Now I'm trying to deploy it on OpenShift. this is the deployment yaml I used to deploy it apiVersion: apps/v1 kind: Deployment metadata: name: xyz annotations: k8s.v1.cni.cncf.io/networks: macvlan-conf spec: selector: matchLabels: name: xyz template: metadata: labels: name: xyz spec: containers: - name: xyz image: 172.30.1.1:5000/myproject/xyz@sha256:bf3d219941ec0de7f52f6babbca23e03cc2611d327552b08f530ead9ec627ec2 imagePullPolicy: Always securityContext: capabilities: add: - ALL privileged: false allowPrivilegeEscalation: false runAsUser: 0 serviceAccount: runasanyuid serviceAccountName: runasanyuid hostNetwork: true resources: limits: memory: "12000Mi" requests: memory: "6000Mi" ports: - containerPort: 2102 command: - /usr/sbin/sshd -D please note that I already created a SCC called 'scc-admin' to run the pods in the project I'm working on with any UID, as I know that OpenShift doesn't allow pods to start with root privilege by default. 
kind: SecurityContextConstraints apiVersion: v1 metadata: name: scc-admin allowPrivilegedContainer: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny fsGroup: type: RunAsAny supplementalGroups: type: RunAsAny users: - developer groups: - developer that's what I found on the internet as a solution for my issue, but I guess it didn't work as well :( [root@centos72_base ~]# oc get scc NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES anyuid true [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret] hostaccess false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret] hostmount-anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret] hostnetwork false [] MustRunAs MustRunAsRange MustRunAs MustRunAs <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] nonroot false [] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] privileged true [*] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*] restricted false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret] scc-admin true [] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [awsElasticBlockStore azureDisk azureFile cephFS cinder configMap downwardAPI emptyDir fc flexVolume flocker gcePersistentDisk gitRepo glusterfs iscsi nfs persistentVolumeClaim photonPersistentDisk portworxVolume projected quobyte rbd scaleIO secret storageOS vsphere] [root@centos72_base ~]# please also note that this image works fine with docker using the below command docker run -d --network host --privileged --cap-add=ALL --security-opt seccomp=unconfined --name 
xyz 172.30.1.1:5000/myproject/xyz /usr/sbin/sshd -D [root@centos72_base ~]# docker ps | grep xyz 793e339ff732 172.30.1.1:5000/myproject/xyz "/usr/sbin/sshd -D" About a minute ago Up About a minute xyz and on OpenShift i get these errors with the deployment file I provided above Error creating: pods "xyz-7966f58588-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1000140000, 1000149999] capabilities.add: Invalid value: "ALL": capability may not be added] which means that i have to remove capabilities: add: - ALL and runAsUser: 0 to start the pod and when I remove them from the yaml file, I get a crash loopback error from the pod so can anyone please help me with that
kubernetes, openshift, redhat, openshift-origin, openshift-3
3
17,031
2
https://stackoverflow.com/questions/68543425/start-pod-with-root-privilege-on-openshift
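A note on the accepted direction for this question: SCC admission is evaluated against the pod's service account, but the `scc-admin` SCC above lists only the `developer` user, so the pod still falls back to `restricted`. The sketch below only prints the `oc` command that would bind the SCC to the service account (the SCC/SA/namespace names are taken from the question; the SCC must also allow the requested capabilities):

```shell
# Dry-run sketch: print the oc command that grants an SCC to a
# service account instead of a user. Nothing is executed against
# a cluster here.
scc_grant_cmd() {
  scc=$1; sa=$2; ns=$3
  echo "oc adm policy add-scc-to-user ${scc} -z ${sa} -n ${ns}"
}
scc_grant_cmd scc-admin runasanyuid myproject
```

Equivalently, the SCC's `users:` list can reference `system:serviceaccount:myproject:runasanyuid` directly.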
54,899,683
What's the difference between PAM auth interface and account interface
In a Redhat documentation (found in this link), under the section PAM Module Interfaces, it states that the auth interface of a PAM module is used for authenticating the user, and the account interface is for verifying whether access is allowed or not. Is there a clear difference between these two interfaces, or can they be used instead of one another?
What's the difference between PAM auth interface and account interface In a Redhat documentation (found in this link), under the section PAM Module Interfaces, it states that the auth interface of a PAM module is used for authenticating the user, and the account interface is for verifying whether access is allowed or not. Is there a clear difference between these two interfaces, or can they be used instead of one another?
linux, authentication, redhat, pam
3
1,828
1
https://stackoverflow.com/questions/54899683/whats-the-difference-between-pam-auth-interface-and-account-interface
52,090,575
Calculate the 5 minutes ceiling
Operating System: Red Hat Enterprise Linux Server 7.2 (Maipo) I want to round the time to the nearest 5 minutes, only up, not down, for example: 08:09:15 should be 08:10:00 08:11:26 should be 08:15:00 08:17:58 should be 08:20:00 I have been trying with: (date -d @$(( (($(date +%s) + 150) / 300) * 300)) "+%H:%M:%S") This will round the time but also down (08:11:18 will result in 08:10:00 and not 08:15:00) Any idea how I can achieve this?
Calculate the 5 minutes ceiling Operating System: Red Hat Enterprise Linux Server 7.2 (Maipo) I want to round the time to the nearest 5 minutes, only up, not down, for example: 08:09:15 should be 08:10:00 08:11:26 should be 08:15:00 08:17:58 should be 08:20:00 I have been trying with: (date -d @$(( (($(date +%s) + 150) / 300) * 300)) "+%H:%M:%S") This will round the time but also down (08:11:18 will result in 08:10:00 and not 08:15:00) Any idea how I can achieve this?
linux, bash, date, redhat
3
170
2
https://stackoverflow.com/questions/52090575/calculate-the-5-minutes-ceiling
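The attempt in the question adds 150 before the integer division, which rounds to the *nearest* bucket. A ceiling needs 299 (one second short of a full bucket) so any time past a boundary bumps to the next one, while exact boundaries stay put:

```shell
# Ceiling an epoch timestamp to the next 5-minute (300 s) boundary.
ceil5() {
  echo $(( (($1 + 299) / 300) * 300 ))
}
# Combine with GNU date exactly like the question's attempt:
#   date -d @"$(ceil5 "$(date +%s)")" "+%H:%M:%S"
ceil5 "$(date +%s)"
```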
45,730,596
Compile with recent gcc on RHEL6: How to distribute the software?
My software compiles on a variety of OSes, including RHEL7. I have a request to build it to run on RHEL6. My problem is that my C++ code relies a lot on C++11 features that are not present in gcc-4.4, the one coming with RHEL6. I've seen there are ways to have more recent gcc versions to run on RHEL6, such as the Developer ToolSet for instance. I've no doubt I'll be able to build my software for RHEL6. However, once compiled with, say, gcc-6, what will I have to provide with the binaries of my software? The C library of gcc-6? The C++ library of gcc-6? Should I instead link them statically to my binary? On top of that, for RHEL, my software is packaged into .rpm files, and installs at standard locations: /usr/bin, /usr/lib ... Where would I install these new C and C++ library files on the target system? (Obviously not in /usr/lib where they may interfere with the default ones!) Edit: My software is a shared object, I guess I can statically link the C++ library? But what about the program (I've no control on it) that will use my shared object. Can it use another version of the C++ library? Won't the linker find lots of duplicates? Looks like I'd open a can of worms... Edit: Would it be possible to use the more recent gcc compiler with the standard C++ library of the RHEL6 stock one?
Compile with recent gcc on RHEL6: How to distribute the software? My software compiles on a variety of OSes, including RHEL7. I have a request to build it to run on RHEL6. My problem is that my C++ code relies a lot on C++11 features that are not present in gcc-4.4, the one coming with RHEL6. I've seen there are ways to have more recent gcc versions to run on RHEL6, such as the Developer ToolSet for instance. I've no doubt I'll be able to build my software for RHEL6. However, once compiled with, say, gcc-6, what will I have to provide with the binaries of my software? The C library of gcc-6? The C++ library of gcc-6? Should I instead link them statically to my binary? On top of that, for RHEL, my software is packaged into .rpm files, and installs at standard locations: /usr/bin, /usr/lib ... Where would I install these new C and C++ library files on the target system? (Obviously not in /usr/lib where they may interfere with the default ones!) Edit: My software is a shared object, I guess I can statically link the C++ library? But what about the program (I've no control on it) that will use my shared object. Can it use another version of the C++ library? Won't the linker find lots of duplicates? Looks like I'd open a can of worms... Edit: Would it be possible to use the more recent gcc compiler with the standard C++ library of the RHEL6 stock one?
c++, linux, gcc, redhat, software-distribution
3
677
3
https://stackoverflow.com/questions/45730596/compile-with-recent-gcc-on-rhel6-how-to-distribute-the-software
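Worth noting for this question: Red Hat's Developer Toolset is built so that nothing extra needs to be shipped — its g++ links the newer libstdc++ pieces statically (via a linker script), and the resulting binary depends only on the stock RHEL 6 libstdc++/glibc. A quick way to sanity-check what a built shared object actually demands is to list its `GLIBCXX_*` version references (`objdump -T` is the authoritative tool; this grep-based probe is a rough sketch that works without binutils):

```shell
# Rough probe: list GLIBCXX symbol versions referenced by a binary
# or .so, scanning it as text with grep -a.
glibcxx_reqs() {
  grep -aoE 'GLIBCXX_[0-9.]+' "$1" | sort -u
}
# e.g. glibcxx_reqs ./libmysoftware.so   (hypothetical file name)
```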
37,834,408
Compiling SOCAT on Redhat
I am trying to install SOCAT and I am quite lite on C++. So following the instructions HERE I am able to get the latest stable version of 1.7.3.1 downloaded, I get through the ./configure , but when I go into ./make I get the following error: nestlex.c:14:7: error: unknown type name ‘ptrdiff_t’ ptrdiff_t *len, ^ nestlex.c: In function ‘nestlex’: nestlex.c:48:7: warning: implicit declaration of function ‘_nestlex’ [-Wimplicit-function-declaration] _nestlex(addr, token, (ptrdiff_t *)len, ends, hquotes, squotes, nests, ^ nestlex.c:48:30: error: ‘ptrdiff_t’ undeclared (first use in this function) _nestlex(addr, token, (ptrdiff_t *)len, ends, hquotes, squotes, nests, ^ nestlex.c:48:30: note: each undeclared identifier is reported only once for each function it appears in nestlex.c:48:41: error: expected expression before ‘)’ token _nestlex(addr, token, (ptrdiff_t *)len, ends, hquotes, squotes, nests, ^ nestlex.c: At top level: nestlex.c:54:7: error: unknown type name ‘ptrdiff_t’ ptrdiff_t *len, ^ nestlex.c: In function ‘nestlex’: nestlex.c:50:1: warning: control reaches end of non-void function [-Wreturn-type] } ^ make: *** [nestlex.o] Error 1 System Information: cat system-release Red Hat Enterprise Linux Server release 7.2 (Maipo) rpm -qa |grep gcc libgcc-4.8.5-4.el7.x86_64 gcc-4.8.5-4.el7.x86_64 rpm -qa |grep glibc glibc-common-2.17-106.el7_2.6.x86_64 glibc-2.17-106.el7_2.6.x86_64 glibc-devel-2.17-106.el7_2.6.x86_64 glibc-headers-2.17-106.el7_2.6.x86_64 rpm -qa |grep gd gdisk-0.8.6-5.el7.x86_64 gd-2.0.35-26.el7.x86_64 gdbm-1.10-8.el7.x86_64 I am not sure where to go from here, as I am fairly new to having to install from source. I have found a few articles describing the problem as not having the correct version of the headers installed. If someone could point me in the right direction, I would greatly appreciate it. Thanks in advance.
Compiling SOCAT on Redhat I am trying to install SOCAT and I am quite lite on C++. So following the instructions HERE I am able to get the latest stable version of 1.7.3.1 downloaded, I get through the ./configure , but when I go into ./make I get the following error: nestlex.c:14:7: error: unknown type name ‘ptrdiff_t’ ptrdiff_t *len, ^ nestlex.c: In function ‘nestlex’: nestlex.c:48:7: warning: implicit declaration of function ‘_nestlex’ [-Wimplicit-function-declaration] _nestlex(addr, token, (ptrdiff_t *)len, ends, hquotes, squotes, nests, ^ nestlex.c:48:30: error: ‘ptrdiff_t’ undeclared (first use in this function) _nestlex(addr, token, (ptrdiff_t *)len, ends, hquotes, squotes, nests, ^ nestlex.c:48:30: note: each undeclared identifier is reported only once for each function it appears in nestlex.c:48:41: error: expected expression before ‘)’ token _nestlex(addr, token, (ptrdiff_t *)len, ends, hquotes, squotes, nests, ^ nestlex.c: At top level: nestlex.c:54:7: error: unknown type name ‘ptrdiff_t’ ptrdiff_t *len, ^ nestlex.c: In function ‘nestlex’: nestlex.c:50:1: warning: control reaches end of non-void function [-Wreturn-type] } ^ make: *** [nestlex.o] Error 1 System Information: cat system-release Red Hat Enterprise Linux Server release 7.2 (Maipo) rpm -qa |grep gcc libgcc-4.8.5-4.el7.x86_64 gcc-4.8.5-4.el7.x86_64 rpm -qa |grep glibc glibc-common-2.17-106.el7_2.6.x86_64 glibc-2.17-106.el7_2.6.x86_64 glibc-devel-2.17-106.el7_2.6.x86_64 glibc-headers-2.17-106.el7_2.6.x86_64 rpm -qa |grep gd gdisk-0.8.6-5.el7.x86_64 gd-2.0.35-26.el7.x86_64 gdbm-1.10-8.el7.x86_64 I am not sure where to go from here, as I am fairly new to having to install from source. I have found a few articles describing the problem as not having the correct version of the headers installed. If someone could point me in the right direction, I would greatly appreciate it. Thanks in advance.
c++, gcc, redhat, llvm-gcc
3
2,605
2
https://stackoverflow.com/questions/37834408/compiling-socat-on-redhat
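For context on the error above: `ptrdiff_t` is declared in `<stddef.h>`, and a commonly reported workaround for this socat build failure is to add that include at the top of `nestlex.c`. A sketch of the one-line fix that writes the patched source to stdout rather than modifying the file in place:

```shell
# Prepend the missing include to a source file (output goes to
# stdout; redirect it yourself if you want to keep it).
add_stddef() {
  printf '#include <stddef.h>\n'
  cat "$1"
}
# add_stddef nestlex.c > nestlex.fixed.c
```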
32,866,961
Joomla 3.4.3 - configuration.php not writable
Using MariaDB, Apache, PHP 5.4.x, RHEL 7 How do I allow configuration.php to be written to? Installed Joomla several times, always ending up with the configuration.php file not writable. I proceeded, copied the config content and created a new php file, placed it where Joomla lives, opened up the permissions, changed to apache:apache, still nothing. I've referenced several articles, notably this one: Installing Joomla 3 Error: Your configuration file or directory is not writable I have also tried creating an empty configuration.php file and placing it in the joomla root, opening up permissions - didn't work. My current state of installation is configuration.php file in place, but unable to remove the installation directory via the web installer (assuming because I shoehorned the config file into place and still not being writable). I've tried several permissions setups then attempting to remove the install directory without success. Manually removing the install directory via rm -r only yields a totally inaccessible site forcing me to wipe my joomla files, databases, and install again. Thanks in advance.
Joomla 3.4.3 - configuration.php not writable Using MariaDB, Apache, PHP 5.4.x, RHEL 7 How do I allow configuration.php to be written to? Installed Joomla several times, always ending up with the configuration.php file not writable. I proceeded, copied the config content and created a new php file, placed it where Joomla lives, opened up the permissions, changed to apache:apache, still nothing. I've referenced several articles, notably this one: Installing Joomla 3 Error: Your configuration file or directory is not writable I have also tried creating an empty configuration.php file and placing it in the joomla root, opening up permissions - didn't work. My current state of installation is configuration.php file in place, but unable to remove the installation directory via the web installer (assuming because I shoehorned the config file into place and still not being writable). I've tried several permissions setups then attempting to remove the install directory without success. Manually removing the install directory via rm -r only yields a totally inaccessible site forcing me to wipe my joomla files, databases, and install again. Thanks in advance.
php, apache, joomla, configuration, redhat
3
6,567
3
https://stackoverflow.com/questions/32866961/joomla-3-4-3-configuration-php-not-writable
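A likely angle for this question: on RHEL 7, once ownership and mode bits look right, the usual remaining blocker is SELinux — Apache can be denied writes by file *context* even when `ls -l` looks fine. This dry-run sketch only prints the relabeling commands (the path is an example; substitute the real web root, and check `ls -Z`/`/var/log/audit/audit.log` first):

```shell
# Dry-run: print the commands that would let Apache write to the
# Joomla tree under SELinux. Nothing is changed on disk here.
joomla_fix_cmds() {
  root=$1
  echo "chown -R apache:apache ${root}"
  echo "chcon -R -t httpd_sys_rw_content_t ${root}"
}
joomla_fix_cmds /var/www/html/joomla
```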
31,710,990
Unable to install Compass on RedHat 7
I am trying to install compass on RedHat 7. I have ruby version ruby 2.0.0p598 (2014-11-13) [x86_64-linux] installed. I am executing the following commands: sudo yum insatll ruby sudo yum install gcc gcc-c++ make automake autoconf curl-devel openssl-devel zlib-devel httpd-devel apr-devel apr-util-devel sqlite-devel sudo gem install compass --http-proxy [URL] I am getting the following error: Building native extensions. This could take a while... ERROR: Error installing compass: ERROR: Failed to build gem native extension. /usr/bin/ruby -r ./siteconf20150729-6603-73q6zu.rb extconf.rb mkmf.rb can't find header files for ruby at /usr/share/include/ruby.h extconf failed, exit code 1 Gem files will remain installed in /usr/local/share/gems/gems/ffi-1.9.10 for inspection. Results logged to /usr/local/lib64/gems/ruby/ffi-1.9.10/gem_make.out Not sure how to fix this. Before installing compass I even tried sudo gem update --system Still the same error. Then I tried updating ruby to 2.2.2 but still the same error. The gem version is 2.0.14
Unable to install Compass on RedHat 7 I am trying to install compass on RedHat 7. I have ruby version ruby 2.0.0p598 (2014-11-13) [x86_64-linux] installed. I am executing the following commands: sudo yum insatll ruby sudo yum install gcc gcc-c++ make automake autoconf curl-devel openssl-devel zlib-devel httpd-devel apr-devel apr-util-devel sqlite-devel sudo gem install compass --http-proxy [URL] I am getting the following error: Building native extensions. This could take a while... ERROR: Error installing compass: ERROR: Failed to build gem native extension. /usr/bin/ruby -r ./siteconf20150729-6603-73q6zu.rb extconf.rb mkmf.rb can't find header files for ruby at /usr/share/include/ruby.h extconf failed, exit code 1 Gem files will remain installed in /usr/local/share/gems/gems/ffi-1.9.10 for inspection. Results logged to /usr/local/lib64/gems/ruby/ffi-1.9.10/gem_make.out Not sure how to fix this. Before installing compass I even tried sudo gem update --system Still the same error. Then I tried updating ruby to 2.2.2 but still the same error. The gem version is 2.0.14
ruby, linux, redhat, compass
3
1,891
1
https://stackoverflow.com/questions/31710990/unable-to-install-compass-on-redhat-7
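The key line in the error above is "mkmf.rb can't find header files for ruby": the C headers ship in `ruby-devel`, which the compiler/toolchain packages installed in the question do not pull in. Dry-run sketch (prints the commands only):

```shell
# Dry-run: the missing piece is ruby-devel, which provides ruby.h
# for building native gem extensions.
compass_fix_cmds() {
  echo "yum install -y ruby-devel"
  echo "gem install compass"
}
compass_fix_cmds
```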
25,434,429
How to increase the TEMP TABLE Space value in Oracle?
Currently my Oracle 11g temp TABLESPACE value is 34GB. I need to increase the table space value to a large value (45GB) I tired the following sql command to increase the temp table space. ALTER TABLESPACE temp ADD TEMPFILE '/oradata/temp01.dbf' SIZE 45G The error: SQL Error: ORA-01144: File size (5536951 blocks) exceeds maximum of 4194303 blocks 01144. 00000 - "File size (%s blocks) exceeds maximum of %s blocks" *Cause: Specified file size is larger than maximum allowable size value. *Action: Specify a smaller size. SELECT value FROM v$parameter WHERE name = 'db_block_size'; The "db_block_size" value is 8192 How do I decide the maximum allowed db_block_size and the corresponding temp TABLESPACE value How do I increase the TEMP tablespace?
How to increase the TEMP TABLE Space value in Oracle? Currently my Oracle 11g temp TABLESPACE value is 34GB. I need to increase the table space value to a large value (45GB) I tried the following SQL command to increase the temp table space. ALTER TABLESPACE temp ADD TEMPFILE '/oradata/temp01.dbf' SIZE 45G The error: SQL Error: ORA-01144: File size (5536951 blocks) exceeds maximum of 4194303 blocks 01144. 00000 - "File size (%s blocks) exceeds maximum of %s blocks" *Cause: Specified file size is larger than maximum allowable size value. *Action: Specify a smaller size. SELECT value FROM v$parameter WHERE name = 'db_block_size'; The "db_block_size" value is 8192 How do I decide the maximum allowed db_block_size and the corresponding temp TABLESPACE value How do I increase the TEMP tablespace?
oracle-database, oracle11g, redhat, tablespace
3
36,699
1
https://stackoverflow.com/questions/25434429/how-to-increase-the-temp-table-space-value-in-oracle
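ORA-01144 is the smallfile datafile cap: a single smallfile data/temp file may hold at most 4194303 blocks, so its byte limit is fixed by `db_block_size`. With 8K blocks that is just under 32 GiB, which is why one 45G tempfile is rejected; the usual options are adding further tempfiles until the tablespace total reaches the target, or recreating TEMP as a BIGFILE tablespace. A sketch of the arithmetic:

```shell
# Max smallfile size = db_block_size * 4194303 blocks, reported in
# whole GiB (integer division).
max_smallfile_gib() {
  block_size=$1
  echo $(( (block_size * 4194303) / 1024 / 1024 / 1024 ))
}
max_smallfile_gib 8192
# SQL alternative (illustrative file name, not executed here):
#   ALTER TABLESPACE temp ADD TEMPFILE '/oradata/temp02.dbf' SIZE 11G;
```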
25,099,702
Amazon Web Service EC2 instance RedHat yum does not work
After setting up a fresh ec2 instance, I tried to install vim using yum... I got this error: ERROR: can not find RHNS CA file: /usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT
Amazon Web Service EC2 instance RedHat yum does not work After setting up a fresh ec2 instance, I tried to install vim using yum... I got this error: ERROR: can not find RHNS CA file: /usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT
amazon-ec2, redhat, yum
3
3,444
3
https://stackoverflow.com/questions/25099702/amazon-web-service-ec2-instance-redhat-yum-does-not-work
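Background that may apply here: RHEL instances on EC2 are normally entitled through RHUI rather than classic RHN, and the error comes from the RHN yum plugin looking for a certificate that was never installed. A common workaround is to skip that plugin; this sketch only prints the commands:

```shell
# Dry-run: bypass the RHN plugin per-command, or disable it in its
# plugin config permanently.
rhn_workaround_cmds() {
  echo "yum --disableplugin=rhnplugin install vim"
  echo "# or set enabled=0 in /etc/yum/pluginconf.d/rhnplugin.conf"
}
rhn_workaround_cmds
```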
9,684,556
How to install pysvn on Redhat Enterprise Linux 6.0?
I tried to install pysvn on my server today, but met some problems as below: [root@coffish pysvn-1.7.6]# python setup.py install running install running bdist_egg running egg_info writing pysvn.egg-info/PKG-INFO writing top-level names to pysvn.egg-info/top_level.txt writing dependency_links to pysvn.egg-info/dependency_links.txt reading manifest file 'pysvn.egg-info/SOURCES.txt' writing manifest file 'pysvn.egg-info/SOURCES.txt' Info: Configure for python 2.6.5 in exec_prefix /usr Info: Found PyCXX include in /usr/local/src/pysvn-1.7.6/Import/pycxx-6.2.4 Info: Found PyCXX include in /usr/local/src/pysvn-1.7.6/Import/pycxx-6.2.4 Info: Found PyCXX Source in /usr/local/src/pysvn-1.7.6/Import/pycxx-6.2.4/Src ('Error:', 'cannot find SVN include svn_client.h - use --svn-inc-dir') make: *** No rule to make target clean'. Stop. make: *** No targets. Stop. make: *** No rule to make target egg'. Stop. error: Not a URL, existing file, or requirement spec: 'dist/pysvn-1.7.6-py2.6-linux- i686.egg' I also tried to find a svn_client.h file and placed it on current directory, but it didn't work. It is suggested that subversion client package be downloaded. But what is the subversion client package? How can I solve this problem.
How to install pysvn on Redhat Enterprise Linux 6.0? I tried to install pysvn on my server today, but met some problems as below: [root@coffish pysvn-1.7.6]# python setup.py install running install running bdist_egg running egg_info writing pysvn.egg-info/PKG-INFO writing top-level names to pysvn.egg-info/top_level.txt writing dependency_links to pysvn.egg-info/dependency_links.txt reading manifest file 'pysvn.egg-info/SOURCES.txt' writing manifest file 'pysvn.egg-info/SOURCES.txt' Info: Configure for python 2.6.5 in exec_prefix /usr Info: Found PyCXX include in /usr/local/src/pysvn-1.7.6/Import/pycxx-6.2.4 Info: Found PyCXX include in /usr/local/src/pysvn-1.7.6/Import/pycxx-6.2.4 Info: Found PyCXX Source in /usr/local/src/pysvn-1.7.6/Import/pycxx-6.2.4/Src ('Error:', 'cannot find SVN include svn_client.h - use --svn-inc-dir') make: *** No rule to make target clean'. Stop. make: *** No targets. Stop. make: *** No rule to make target egg'. Stop. error: Not a URL, existing file, or requirement spec: 'dist/pysvn-1.7.6-py2.6-linux- i686.egg' I also tried to find a svn_client.h file and placed it on current directory, but it didn't work. It is suggested that subversion client package be downloaded. But what is the subversion client package? How can I solve this problem.
installation, redhat, pysvn
3
6,251
3
https://stackoverflow.com/questions/9684556/how-to-install-pysvn-on-redhat-enterprise-linux-6-0
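To answer the "what is the subversion client package" part: the build wants the development package that ships `svn_client.h` and the rest of the Subversion include tree — `subversion-devel` on RHEL (plus, depending on how Subversion was built, `neon-devel`). Copying the lone header next to setup.py cannot work because the headers reference each other and the libraries are needed at link time. Dry-run sketch:

```shell
# Dry-run: install the Subversion development headers, then rebuild.
pysvn_fix_cmds() {
  echo "yum install -y subversion-devel neon-devel"
  echo "python setup.py install"
}
pysvn_fix_cmds
```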
2,426,596
Mono : Is it possible to host a web/wcf service from console application?
I know with .NET we can host wcf service from a console application without the need of webservers like IIS or apache. Is it possible to do the same with Mono 2.6.1 on a RHEL 5 or CentOS? Any links to any documentation will be highly helpful.
Mono : Is it possible to host a web/wcf service from console application? I know with .NET we can host wcf service from a console application without the need of webservers like IIS or apache. Is it possible to do the same with Mono 2.6.1 on a RHEL 5 or CentOS? Any links to any documentation will be highly helpful.
linux, wcf, mono, centos, redhat
3
2,678
1
https://stackoverflow.com/questions/2426596/mono-is-it-possible-to-host-a-web-wcf-service-from-console-application
70,841,540
.NET Core apps on Linux (Redhat) creating mysterious ".net" directories/files in user home directories
We have a series of .NET Core console apps that are installed in a particular directory: /users/apps/app1/MyApp1 /users/apps/app2/MyApp2 etc... The apps run fine. However, we have a problem where the .NET runtime seems to place some files in a ".net" folder in the current user's home directory. unclejoe@myhost::/home/unclejoe> ls -la total 40 drwx------. 8 unclejoe mygroup 139 Jan 24 14:42 . drwxr-xr-x. 90 root root 4096 Jan 21 15:29 .. -rw-------. 1 unclejoe mygroup 15510 Jan 24 14:42 .bash_history drwx------. 10 unclejoe mygroup 190 Jan 24 01:42 .net Within the .net folder, we see a bunch of seemingly temp folders: [unclejoe@myhost .net]$ ls -la total 4 drwx------. 10 unclejoe mygroup 190 Jan 24 01:42 . drwx------. 14 unclejoe mygroup 4096 Jan 23 16:28 .. drwx------. 4 unclejoe mygroup 46 Jan 24 12:09 MyApp1 drwx------. 5 unclejoe mygroup 66 Jan 24 01:42 MyApp2 Drilling further: [unclejoe@myhost MyApp1]$ ls -la total 24 drwx------. 4 unclejoe mygroup 46 Jan 24 12:09 . drwx------. 10 unclejoe mygroup 190 Jan 24 01:42 .. drwx------. 2 unclejoe mygroup 8192 Jan 24 01:42 cz1zui3n.uma drwx------. 2 unclejoe mygroup 8192 Jan 24 12:09 pvwttlkm.z4s Drilling furthest: [unclejoe@myhost MyApp1]$ cd cz1zui3n.uma [unclejoe@myhost cz1zui3n.uma]$ ls -l total 30808 -rw-r--r--. 1 unclejoe mygroup 330240 Jan 24 01:42 Autofac.dll -rw-r--r--. 1 unclejoe mygroup 16384 Jan 24 01:42 Autofac.Extensions.DependencyInjection.dll -rw-r--r--. 1 unclejoe mygroup 143609 Jan 24 01:42 MyApp1.deps.json -rw-r--r--. 1 unclejoe mygroup 10752 Jan 24 01:42 MyApp1.dll -rw-r--r--. 1 unclejoe mygroup 149 Jan 24 01:42 MyApp1.runtimeconfig.json -rw-r--r--. 1 unclejoe mygroup 27136 Jan 24 01:42 Common.dll The problem is we don't expect these artifacts (dlls/app binaries) to be pushed here as its eating up a lot of space over time, especially when these strange temp directories get created (and never cleaned up on its own). 
We do not specify any environment variable in our .NET code to point to this home location. Question: Do you know what's causing these directories and files to get created? It appears to get created when the app runs after some period of time. Any areas that we should be checking to identify root cause? Thanks!
.NET Core apps on Linux (Redhat) creating mysterious &quot;.net&quot; directories/files in user home directories We have a series of .NET Core console apps that are installed in a particular directory: /users/apps/app1/MyApp1 /users/apps/app2/MyApp2 etc... The apps run fine. However, we have a problem where the .NET runtime seems to place some files in a ".net" folder in the current user's home directory. unclejoe@myhost::/home/unclejoe> ls -la total 40 drwx------. 8 unclejoe mygroup 139 Jan 24 14:42 . drwxr-xr-x. 90 root root 4096 Jan 21 15:29 .. -rw-------. 1 unclejoe mygroup 15510 Jan 24 14:42 .bash_history drwx------. 10 unclejoe mygroup 190 Jan 24 01:42 .net Within the .net folder, we see a bunch of seemingly temp folders: [unclejoe@myhost .net]$ ls -la total 4 drwx------. 10 unclejoe mygroup 190 Jan 24 01:42 . drwx------. 14 unclejoe mygroup 4096 Jan 23 16:28 .. drwx------. 4 unclejoe mygroup 46 Jan 24 12:09 MyApp1 drwx------. 5 unclejoe mygroup 66 Jan 24 01:42 MyApp2 Drilling further: [unclejoe@myhost MyApp1]$ ls -la total 24 drwx------. 4 unclejoe mygroup 46 Jan 24 12:09 . drwx------. 10 unclejoe mygroup 190 Jan 24 01:42 .. drwx------. 2 unclejoe mygroup 8192 Jan 24 01:42 cz1zui3n.uma drwx------. 2 unclejoe mygroup 8192 Jan 24 12:09 pvwttlkm.z4s Drilling furthest: [unclejoe@myhost MyApp1]$ cd cz1zui3n.uma [unclejoe@myhost cz1zui3n.uma]$ ls -l total 30808 -rw-r--r--. 1 unclejoe mygroup 330240 Jan 24 01:42 Autofac.dll -rw-r--r--. 1 unclejoe mygroup 16384 Jan 24 01:42 Autofac.Extensions.DependencyInjection.dll -rw-r--r--. 1 unclejoe mygroup 143609 Jan 24 01:42 MyApp1.deps.json -rw-r--r--. 1 unclejoe mygroup 10752 Jan 24 01:42 MyApp1.dll -rw-r--r--. 1 unclejoe mygroup 149 Jan 24 01:42 MyApp1.runtimeconfig.json -rw-r--r--. 
1 unclejoe mygroup 27136 Jan 24 01:42 Common.dll The problem is we don't expect these artifacts (dlls/app binaries) to be pushed here as its eating up a lot of space over time, especially when these strange temp directories get created (and never cleaned up on its own). We do not specify any environment variable in our .NET code to point to this home location. Question: Do you know what's causing these directories and files to get created? It appears to get created when the app runs after some period of time. Any areas that we should be checking to identify root cause? Thanks!
c#, .net, linux, redhat
3
501
1
https://stackoverflow.com/questions/70841540/net-core-apps-on-linux-redhat-creating-mysterious-net-directories-files-in
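The `~/.net/<AppName>/<random>` layout above is where single-file (self-extracting) .NET bundles unpack themselves, and each new build extracts a fresh directory without pruning old ones. The documented override is the `DOTNET_BUNDLE_EXTRACT_BASE_DIR` environment variable; the sketch below points it at a location you can clean on a schedule (the path is just an example):

```shell
# Redirect single-file bundle extraction away from $HOME/.net.
bundle_dir=/var/tmp/dotnet-bundles   # example location, pick your own
export DOTNET_BUNDLE_EXTRACT_BASE_DIR="$bundle_dir"
mkdir -p "$bundle_dir"
echo "bundles will extract under: $DOTNET_BUNDLE_EXTRACT_BASE_DIR"
# ./MyApp1   # then run the app as usual
```

Alternatively, publishing the apps as non-single-file removes the extraction step entirely.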
62,757,229
How do I resolve this error 'cannot find -lc'?
How do I compile this code in CentOS 7? I am reading one book and in the book they use -static while compiling, so that's how I did it and I get the errors I mentioned below, but when I don't use -static I get no errors and it compiles successfully. First attempt: main() { exit(0); } I get this error when I try to compile it. $ gcc -static -o exit exit.c exit.c: In function ‘main’: exit.c:3:9: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default] exit(0); ^ /usr/bin/ld: cannot find -lc collect2: error: ld returned 1 exit status Second attempt: Then I googled this error and lots of articles told me to include the stdlib.h header, so I did that as well and I get this error: Code: #include <stdlib.h> main() { exit(0); } Now when I compile it, I get the following error. $ gcc -static -o exit exit.c /usr/bin/ld: cannot find -lc collect2: error: ld returned 1 exit status linux version: $ uname -a Linux localhost.localdomain 3.10.0-1127.13.1.el7.centos.plus.i686 #1 SMP Thu Jun 25 16:59:06 UTC 2020 i686 i686 i386 GNU/Linux
How do I resolve this error 'cannot find -lc'? How do I compile this code in CentOS 7? I am reading one book and in the book they use -static while compiling, so that's how I did it and I get the errors I mentioned below, but when I don't use -static I get no errors and it compiles successfully. First attempt: main() { exit(0); } I get this error when I try to compile it. $ gcc -static -o exit exit.c exit.c: In function ‘main’: exit.c:3:9: warning: incompatible implicit declaration of built-in function ‘exit’ [enabled by default] exit(0); ^ /usr/bin/ld: cannot find -lc collect2: error: ld returned 1 exit status Second attempt: Then I googled this error and lots of articles told me to include the stdlib.h header, so I did that as well and I get this error: Code: #include <stdlib.h> main() { exit(0); } Now when I compile it, I get the following error. $ gcc -static -o exit exit.c /usr/bin/ld: cannot find -lc collect2: error: ld returned 1 exit status linux version: $ uname -a Linux localhost.localdomain 3.10.0-1127.13.1.el7.centos.plus.i686 #1 SMP Thu Jun 25 16:59:06 UTC 2020 i686 i686 i386 GNU/Linux
c, linux, gcc, centos, redhat
3
3,313
1
https://stackoverflow.com/questions/62757229/how-do-i-resolve-this-error-cannot-find-lc
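The missing piece behind "cannot find -lc" with `-static` is the static archive `libc.a`: the base glibc package ships only the shared library, and on RHEL/CentOS the archive lives in `glibc-static`. A sketch probe that reports whether it is installed:

```shell
# Probe for libc.a in the given directories; suggests the fix if
# it is absent. Defaults shown for RHEL/CentOS multilib layouts.
check_static_libc() {
  for d in "$@"; do
    if [ -e "$d/libc.a" ]; then
      echo "found $d/libc.a"
      return 0
    fi
  done
  echo "libc.a missing: install it with 'yum install glibc-static'"
  return 1
}
check_static_libc /usr/lib64 /usr/lib || true
```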
57,743,479
How to find which part of my Rust project uses GLIBC 2.18
With cargo-tree , I can see my project depends on libc v0.2.62 $ cargo tree -p libc -i | grep libc libc v0.2.62 But it actually requires two versions GLIB_2.14 and GLIBC2.18 . ldd error messages are as follows: /lib64/libc.so.6: version GLIBC_2.18' not found /lib64/libc.so.6: version GLIBC_2.14' not found I am able to get GLIBC_2.14 but not GLIBC_2.18 . So I plan to switch to older versions of Rust or some crates I use. I need to find out which one depends on GLIBC_2.18 first. Can anyone help me?
How to find which part of my Rust project uses GLIBC 2.18 With cargo-tree, I can see my project depends on libc v0.2.62 $ cargo tree -p libc -i | grep libc libc v0.2.62 But it actually requires two versions, GLIBC_2.14 and GLIBC_2.18. ldd error messages are as follows: /lib64/libc.so.6: version `GLIBC_2.18' not found /lib64/libc.so.6: version `GLIBC_2.14' not found I am able to get GLIBC_2.14 but not GLIBC_2.18. So I plan to switch to older versions of Rust or some crates I use. I need to find out which one depends on GLIBC_2.18 first. Can anyone help me?
linux, rust, redhat, glibc
3
1,765
1
https://stackoverflow.com/questions/57743479/how-to-find-which-part-of-my-rust-project-uses-glibc-2-18
56,465,589
Kibana - Request Timeout after 30000ms at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15
Fresh installment of Kibana (On redhat 7.6 (64bit) via yum) starts, but is restarting every minute. Before it restarted every 5 seconds, but i fixed it after changing /etc/fstab to allow noexec on /var cause it is needed for /var/lib/kibana/headless_shell-linux/headless_shell . I tried starting Kibana by command instead of systemctl to see full logs: /usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml It gives me output: log [14:59:49.784] [info][status][plugin:kibana@undefined] Status changed from uninitialized to green - Ready log [14:59:49.812] [info][status][plugin:elasticsearch@undefined] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.814] [info][status][plugin:xpack_main@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.820] [info][status][plugin:graph@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.828] [info][status][plugin:monitoring@7.1.1] Status changed from uninitialized to green - Ready log [14:59:49.831] [info][status][plugin:spaces@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.841] [warning][security] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml log [14:59:49.845] [warning][security] Session cookies will be transmitted over insecure connections. This is not recommended. 
log [14:59:49.851] [info][status][plugin:security@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.872] [info][status][plugin:searchprofiler@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.874] [info][status][plugin:ml@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.907] [info][status][plugin:tilemap@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.909] [info][status][plugin:watcher@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.922] [info][status][plugin:grokdebugger@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.926] [info][status][plugin:dashboard_mode@7.1.1] Status changed from uninitialized to green - Ready log [14:59:49.927] [info][status][plugin:logstash@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.932] [info][status][plugin:beats_management@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.945] [info][status][plugin:apm_oss@undefined] Status changed from uninitialized to green - Ready log [14:59:49.956] [info][status][plugin:apm@7.1.1] Status changed from uninitialized to green - Ready log [14:59:49.957] [info][status][plugin:tile_map@undefined] Status changed from uninitialized to green - Ready log [14:59:49.959] [info][status][plugin:task_manager@7.1.1] Status changed from uninitialized to green - Ready log [14:59:49.961] [info][status][plugin:maps@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.965] [info][status][plugin:interpreter@undefined] Status changed from uninitialized to green - Ready log [14:59:49.972] [info][status][plugin:canvas@7.1.1] Status changed from uninitialized to green - Ready log [14:59:49.975] 
[info][status][plugin:license_management@7.1.1] Status changed from uninitialized to green - Ready log [14:59:49.977] [info][status][plugin:cloud@7.1.1] Status changed from uninitialized to green - Ready log [14:59:49.978] [info][status][plugin:index_management@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.999] [info][status][plugin:console@undefined] Status changed from uninitialized to green - Ready log [14:59:50.002] [info][status][plugin:console_extensions@7.1.1] Status changed from uninitialized to green - Ready log [14:59:50.004] [info][status][plugin:notifications@7.1.1] Status changed from uninitialized to green - Ready log [14:59:50.006] [info][status][plugin:index_lifecycle_management@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:50.039] [info][status][plugin:infra@7.1.1] Status changed from uninitialized to green - Ready log [14:59:50.041] [info][status][plugin:rollup@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:50.048] [info][status][plugin:remote_clusters@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:50.053] [info][status][plugin:cross_cluster_replication@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:50.061] [info][status][plugin:translations@7.1.1] Status changed from uninitialized to green - Ready log [14:59:50.069] [info][status][plugin:upgrade_assistant@7.1.1] Status changed from uninitialized to green - Ready log [14:59:50.084] [info][status][plugin:uptime@7.1.1] Status changed from uninitialized to green - Ready log [14:59:50.086] [info][status][plugin:oss_telemetry@7.1.1] Status changed from uninitialized to green - Ready log [14:59:50.094] [info][status][plugin:metrics@undefined] Status changed from uninitialized to green - Ready log [14:59:50.210] [info][status][plugin:timelion@undefined] Status changed from uninitialized 
to green - Ready log [14:59:50.507] [info][status][plugin:elasticsearch@undefined] Status changed from yellow to green - Ready log [14:59:50.513] [error][status][plugin:xpack_main@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.514] [error][status][plugin:graph@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.514] [error][status][plugin:spaces@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.514] [error][status][plugin:security@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.515] [error][status][plugin:searchprofiler@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.515] [error][status][plugin:ml@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.515] [error][status][plugin:tilemap@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.515] [error][status][plugin:watcher@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.516] [error][status][plugin:grokdebugger@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.516] [error][status][plugin:logstash@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.516] [error][status][plugin:beats_management@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. 
log [14:59:50.516] [error][status][plugin:maps@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.517] [error][status][plugin:index_management@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.517] [error][status][plugin:index_lifecycle_management@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.517] [error][status][plugin:rollup@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.517] [error][status][plugin:remote_clusters@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.517] [error][status][plugin:cross_cluster_replication@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.901] [warning][browser-driver][reporting] Enabling the Chromium sandbox provides an additional layer of protection. log [14:59:50.903] [warning][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml log [14:59:50.919] [error][status][plugin:reporting@7.1.1] Status changed from uninitialized to red - [data] Elasticsearch cluster did not respond with license information. 
error [15:00:20.514] [warning][process] UnhandledPromiseRejectionWarning: Error: Request Timeout after 30000ms at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15 at Timeout.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7) at ontimeout (timers.js:436:11) at tryOnTimeout (timers.js:300:5) at listOnTimeout (timers.js:263:5) at Timer.processTimers (timers.js:223:10) at emitWarning (internal/process/promises.js:81:15) at emitPromiseRejectionWarnings (internal/process/promises.js:120:9) at process._tickCallback (internal/process/next_tick.js:69:34) error [15:00:20.515] [warning][process] Error: Request Timeout after 30000ms at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15 at Timeout.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7) at ontimeout (timers.js:436:11) at tryOnTimeout (timers.js:300:5) at listOnTimeout (timers.js:263:5) at Timer.processTimers (timers.js:223:10) log [15:00:20.908] [warning][reporting] Could not retrieve cluster settings, because of Request Timeout after 30000ms log [15:00:20.940] [warning][task_manager] PollError Request Timeout after 30000ms log [15:00:20.942] [warning][maps] Error scheduling telemetry task, received NotInitialized: Tasks cannot be scheduled until after task manager is initialized! log [15:00:20.943] [warning][telemetry] Error scheduling task, received NotInitialized: Tasks cannot be scheduled until after task manager is initialized! I want Kibana to start without failing. 
Kibana.yml: server.port: 5601 server.host: "xxx.xx.xx.x" server.name: "elk-log-kibana" elasticsearch.hosts: "[URL] server.basePath: "/kibana" server.rewriteBasePath: true elasticsearch.yml: cluster.name: elk-log-elasticsearch path.data: /var/lib/elasticsearch path.logs: /var/log/elasticsearch http.port: 9200 network.host: 0.0.0.0 discovery.seed_hosts: 127.0.0.1 Elasticsearch works fine: curl -v [URL] * About to connect() to localhost port 9200 (#0) * Trying ::1... * Connected to localhost (::1) port 9200 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.29.0 > Host: localhost:9200 > Accept: */* > < HTTP/1.1 200 OK
Kibana - Request Timeout after 30000ms at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15 Fresh installment of Kibana (On redhat 7.6 (64bit) via yum) starts, but is restarting every minute. Before it restarted every 5 seconds, but i fixed it after changing /etc/fstab to allow noexec on /var cause it is needed for /var/lib/kibana/headless_shell-linux/headless_shell . I tried starting Kibana by command instead of systemctl to see full logs: /usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml It gives me output: log [14:59:49.784] [info][status][plugin:kibana@undefined] Status changed from uninitialized to green - Ready log [14:59:49.812] [info][status][plugin:elasticsearch@undefined] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.814] [info][status][plugin:xpack_main@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.820] [info][status][plugin:graph@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.828] [info][status][plugin:monitoring@7.1.1] Status changed from uninitialized to green - Ready log [14:59:49.831] [info][status][plugin:spaces@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.841] [warning][security] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml log [14:59:49.845] [warning][security] Session cookies will be transmitted over insecure connections. This is not recommended. 
log [14:59:49.851] [info][status][plugin:security@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.872] [info][status][plugin:searchprofiler@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.874] [info][status][plugin:ml@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.907] [info][status][plugin:tilemap@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.909] [info][status][plugin:watcher@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.922] [info][status][plugin:grokdebugger@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.926] [info][status][plugin:dashboard_mode@7.1.1] Status changed from uninitialized to green - Ready log [14:59:49.927] [info][status][plugin:logstash@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.932] [info][status][plugin:beats_management@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.945] [info][status][plugin:apm_oss@undefined] Status changed from uninitialized to green - Ready log [14:59:49.956] [info][status][plugin:apm@7.1.1] Status changed from uninitialized to green - Ready log [14:59:49.957] [info][status][plugin:tile_map@undefined] Status changed from uninitialized to green - Ready log [14:59:49.959] [info][status][plugin:task_manager@7.1.1] Status changed from uninitialized to green - Ready log [14:59:49.961] [info][status][plugin:maps@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.965] [info][status][plugin:interpreter@undefined] Status changed from uninitialized to green - Ready log [14:59:49.972] [info][status][plugin:canvas@7.1.1] Status changed from uninitialized to green - Ready log [14:59:49.975] 
[info][status][plugin:license_management@7.1.1] Status changed from uninitialized to green - Ready log [14:59:49.977] [info][status][plugin:cloud@7.1.1] Status changed from uninitialized to green - Ready log [14:59:49.978] [info][status][plugin:index_management@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:49.999] [info][status][plugin:console@undefined] Status changed from uninitialized to green - Ready log [14:59:50.002] [info][status][plugin:console_extensions@7.1.1] Status changed from uninitialized to green - Ready log [14:59:50.004] [info][status][plugin:notifications@7.1.1] Status changed from uninitialized to green - Ready log [14:59:50.006] [info][status][plugin:index_lifecycle_management@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:50.039] [info][status][plugin:infra@7.1.1] Status changed from uninitialized to green - Ready log [14:59:50.041] [info][status][plugin:rollup@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:50.048] [info][status][plugin:remote_clusters@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:50.053] [info][status][plugin:cross_cluster_replication@7.1.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch log [14:59:50.061] [info][status][plugin:translations@7.1.1] Status changed from uninitialized to green - Ready log [14:59:50.069] [info][status][plugin:upgrade_assistant@7.1.1] Status changed from uninitialized to green - Ready log [14:59:50.084] [info][status][plugin:uptime@7.1.1] Status changed from uninitialized to green - Ready log [14:59:50.086] [info][status][plugin:oss_telemetry@7.1.1] Status changed from uninitialized to green - Ready log [14:59:50.094] [info][status][plugin:metrics@undefined] Status changed from uninitialized to green - Ready log [14:59:50.210] [info][status][plugin:timelion@undefined] Status changed from uninitialized 
to green - Ready log [14:59:50.507] [info][status][plugin:elasticsearch@undefined] Status changed from yellow to green - Ready log [14:59:50.513] [error][status][plugin:xpack_main@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.514] [error][status][plugin:graph@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.514] [error][status][plugin:spaces@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.514] [error][status][plugin:security@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.515] [error][status][plugin:searchprofiler@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.515] [error][status][plugin:ml@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.515] [error][status][plugin:tilemap@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.515] [error][status][plugin:watcher@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.516] [error][status][plugin:grokdebugger@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.516] [error][status][plugin:logstash@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.516] [error][status][plugin:beats_management@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. 
log [14:59:50.516] [error][status][plugin:maps@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.517] [error][status][plugin:index_management@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.517] [error][status][plugin:index_lifecycle_management@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.517] [error][status][plugin:rollup@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.517] [error][status][plugin:remote_clusters@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.517] [error][status][plugin:cross_cluster_replication@7.1.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information. log [14:59:50.901] [warning][browser-driver][reporting] Enabling the Chromium sandbox provides an additional layer of protection. log [14:59:50.903] [warning][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml log [14:59:50.919] [error][status][plugin:reporting@7.1.1] Status changed from uninitialized to red - [data] Elasticsearch cluster did not respond with license information. 
error [15:00:20.514] [warning][process] UnhandledPromiseRejectionWarning: Error: Request Timeout after 30000ms at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15 at Timeout.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7) at ontimeout (timers.js:436:11) at tryOnTimeout (timers.js:300:5) at listOnTimeout (timers.js:263:5) at Timer.processTimers (timers.js:223:10) at emitWarning (internal/process/promises.js:81:15) at emitPromiseRejectionWarnings (internal/process/promises.js:120:9) at process._tickCallback (internal/process/next_tick.js:69:34) error [15:00:20.515] [warning][process] Error: Request Timeout after 30000ms at /usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:355:15 at Timeout.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:384:7) at ontimeout (timers.js:436:11) at tryOnTimeout (timers.js:300:5) at listOnTimeout (timers.js:263:5) at Timer.processTimers (timers.js:223:10) log [15:00:20.908] [warning][reporting] Could not retrieve cluster settings, because of Request Timeout after 30000ms log [15:00:20.940] [warning][task_manager] PollError Request Timeout after 30000ms log [15:00:20.942] [warning][maps] Error scheduling telemetry task, received NotInitialized: Tasks cannot be scheduled until after task manager is initialized! log [15:00:20.943] [warning][telemetry] Error scheduling task, received NotInitialized: Tasks cannot be scheduled until after task manager is initialized! I want Kibana to start without failing. 
Kibana.yml: server.port: 5601 server.host: "xxx.xx.xx.x" server.name: "elk-log-kibana" elasticsearch.hosts: "[URL] server.basePath: "/kibana" server.rewriteBasePath: true elasticsearch.yml: cluster.name: elk-log-elasticsearch path.data: /var/lib/elasticsearch path.logs: /var/log/elasticsearch http.port: 9200 network.host: 0.0.0.0 discovery.seed_hosts: 127.0.0.1 Elasticsearch works fine: curl -v [URL] * About to connect() to localhost port 9200 (#0) * Trying ::1... * Connected to localhost (::1) port 9200 (#0) > GET / HTTP/1.1 > User-Agent: curl/7.29.0 > Host: localhost:9200 > Accept: */* > < HTTP/1.1 200 OK
elasticsearch, kibana, redhat
3
8,900
1
https://stackoverflow.com/questions/56465589/kibana-request-timeout-after-30000ms-at-usr-share-kibana-node-modules-elastic
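A hedged sketch of where I would look first for the Kibana timeout above (an assumption, not a confirmed fix): Elasticsearch answers on `localhost:9200` in the question, so `elasticsearch.hosts` in kibana.yml should point at a URL that is reachable from the Kibana host, and `elasticsearch.requestTimeout` can raise the default 30000 ms (the exact figure in the stack trace) while debugging. Values below are placeholders.

```shell
# Write a kibana.yml fragment illustrating the two settings to check.
cat > kibana-fragment.yml <<'EOF'
# Point Kibana at the URL that actually answers curl from the Kibana host:
elasticsearch.hosts: ["http://127.0.0.1:9200"]
# Default is 30000 ms -- the timeout appearing in the error above:
elasticsearch.requestTimeout: 60000
EOF
grep -q 'requestTimeout' kibana-fragment.yml && echo "fragment written"

# Sanity check from the Kibana host itself:
#   curl -s http://127.0.0.1:9200/_cluster/health
```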
56,423,862
Ansible disable repo not functioning as expected
I've tried this with two different modules, however I always get status returned: "OK", rather than the expected "Changed". Checking the Server also shows that no changes have been made and the repo is still active: - hosts: rh_estate user: whatuser gather_facts: true become: true tasks: - name: Disable YUM Repo yum_repository: name: rhui-rhel-7-server-rhui-extras-debug-rpms state: absent when: ansible_facts['distribution'] == "RedHat" And with the Yum Module: - name: Disable YUM Repo yum: disablerepo: rhui-rhel-7-server-rhui-extras-debug-rpms when: ansible_facts['distribution'] == "RedHat" I would rather use modules than Line in file. I suppose if there is actually no other way, I would prefer the shell yum-config-manager --disable rhui-rhel-7-server-rhui-extras-debug-rpms Repo Declaration: /etc/yum.repos.d/rh-cloud.repo [rhui-rhel-7-server-rhui-extras-debug-rpms] name=Red Hat Enterprise Linux 7 Server - Extras from RHUI (Debug RPMs) baseurl=[URL] [URL] [URL] enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release sslverify=1 sslclientcert=/etc/pki/rhui/product/content.crt sslclientkey=/etc/pki/rhui/key.pem Output of yum repolist all: [root@server ~]# yum repolist all | grep 'repo id\|rhui-rhel-7-server-rhui-extras' repo id status rhui-rhel-7-server-rhui-extras-debug-rpms/x86_64 enabled: 262 rhui-rhel-7-server-rhui-extras-rpms/x86_64 enabled: 1,105 rhui-rhel-7-server-rhui-extras-source-rpms/x86_64 enabled: 430
Ansible disable repo not functioning as expected I've tried this with two different modules, however I always get status returned: "OK", rather than the expected "Changed". Checking the Server also shows that no changes have been made and the repo is still active: - hosts: rh_estate user: whatuser gather_facts: true become: true tasks: - name: Disable YUM Repo yum_repository: name: rhui-rhel-7-server-rhui-extras-debug-rpms state: absent when: ansible_facts['distribution'] == "RedHat" And with the Yum Module: - name: Disable YUM Repo yum: disablerepo: rhui-rhel-7-server-rhui-extras-debug-rpms when: ansible_facts['distribution'] == "RedHat" I would rather use modules than Line in file. I suppose if there is actually no other way, I would prefer the shell yum-config-manager --disable rhui-rhel-7-server-rhui-extras-debug-rpms Repo Declaration: /etc/yum.repos.d/rh-cloud.repo [rhui-rhel-7-server-rhui-extras-debug-rpms] name=Red Hat Enterprise Linux 7 Server - Extras from RHUI (Debug RPMs) baseurl=[URL] [URL] [URL] enabled=1 gpgcheck=1 gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release sslverify=1 sslclientcert=/etc/pki/rhui/product/content.crt sslclientkey=/etc/pki/rhui/key.pem Output of yum repolist all: [root@server ~]# yum repolist all | grep 'repo id\|rhui-rhel-7-server-rhui-extras' repo id status rhui-rhel-7-server-rhui-extras-debug-rpms/x86_64 enabled: 262 rhui-rhel-7-server-rhui-extras-rpms/x86_64 enabled: 1,105 rhui-rhel-7-server-rhui-extras-source-rpms/x86_64 enabled: 430
linux, bash, ansible, redhat
3
3,511
1
https://stackoverflow.com/questions/56423862/ansible-disable-repo-not-functioning-as-expected
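One plausible explanation and workaround for the Ansible question above, hedged: `yum_repository` manages repo files it creates itself (one `/etc/yum.repos.d/<name>.repo` per repo), while this repo lives in the vendor-supplied `rh-cloud.repo`, so `state: absent` matches nothing and reports "ok". Flipping `enabled=0` inside the existing file with the `ini_file` module does produce a real, idempotent change. Sketch only; the task name is illustrative.

```shell
# Write a task file showing the ini_file-based approach.
cat > disable-repo-task.yml <<'EOF'
- name: Disable RHUI extras debug repo in the file it actually lives in
  ini_file:
    path: /etc/yum.repos.d/rh-cloud.repo
    section: rhui-rhel-7-server-rhui-extras-debug-rpms
    option: enabled
    value: "0"
  when: ansible_facts['distribution'] == "RedHat"
EOF
grep -q 'ini_file' disable-repo-task.yml && echo "task written"
```

Note also that `disablerepo:` on the `yum` module only disables a repo for that single transaction; it never edits the repo file, which matches the "no change" behaviour observed.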
49,618,032
High CPU usage by sssd_nss during heavy disk IO
I'm on Oracle Enterprise Linux 7u2 where I perform frequent, heavy maven builds which generate a large number of jars/wars/ears. What I've noticed recently (after some of the meltdown / spectre patches) is very heavy CPU utilization by this process: /usr/libexec/sssd/sssd_nss --uid 0 --gid 0 --debug-to-files When my server is idle? No problems. But during the heavy disk IO portions of my maven builds, the maven java process and sssd_nss fight over CPU, each taking about 50% of the total. (For reference, I have a 4 core Xeon server) I don't really know what this process is (except that it might deal with LDAP?) or why it would care about java file copying and zipping. (This is all on local / non-NFS disk)
High CPU usage by sssd_nss during heavy disk IO I'm on Oracle Enterprise Linux 7u2 where I perform frequent, heavy maven builds which generate a large number of jars/wars/ears. What I've noticed recently (after some of the meltdown / spectre patches) is very heavy CPU utilization by this process: /usr/libexec/sssd/sssd_nss --uid 0 --gid 0 --debug-to-files When my server is idle? No problems. But during the heavy disk IO portions of my maven builds, the maven java process and sssd_nss fight over CPU, each taking about 50% of the total. (For reference, I have a 4 core Xeon server) I don't really know what this process is (except that it might deal with LDAP?) or why it would care about java file copying and zipping. (This is all on local / non-NFS disk)
linux, centos, redhat, oracle-enterprise-linux
3
7,970
1
https://stackoverflow.com/questions/49618032/high-cpu-usage-by-sssd-nss-during-heavy-disk-io
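A hedged sketch of a common mitigation for the sssd_nss load above (my reading, not a verified diagnosis): creating thousands of files makes the build issue a user/group lookup per file, and every client-cache miss round-trips to `sssd_nss`. Lengthening sssd's caches is the usual first step; the timeout values below are illustrative, not tuned, and the domain name is a placeholder.

```shell
# Write an sssd.conf fragment illustrating the cache knobs to raise.
cat > sssd-fragment.conf <<'EOF'
[nss]
# Keep resolved entries in the fast in-client memory cache longer (seconds):
memcache_timeout = 600

[domain/example.com]
# How long sssd serves entries from its own cache before re-resolving:
entry_cache_timeout = 5400
EOF
grep -q 'memcache_timeout' sssd-fragment.conf && echo "fragment ok"

# Merge into /etc/sssd/sssd.conf, then:  systemctl restart sssd
```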
34,948,458
Kernel build for CentOS 7: kernel-firmware not generated
I'm in the process of rebuilding the Linux kernel for CentOS 7 to select a different preemption level. My steps follow: sudo yum install rpm-build redhat-rpm-config asciidoc hmaccalc perl-ExtUtils-Embed pesign xmlto audit-libs-devel binutils-devel elfutils-devel elfutils-libelf-devel ncurses-devel newt-devel numactl-devel pciutils-devel python-devel zlib-devel gcc patchutils bison make gcc redhat-rpm-config mkdir -p ~/rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS} echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros wget [URL] rpm -i kernel-3.10.0-327.4.4.el7.src.rpm cd ~/rpmbuild/SPECS rpmbuild -bp --target=$(uname -m) kernel.spec Kernel in BUILD configured and config file copied in SOURCES rpmbuild -bb --with firmware --without kabichk --without debug --without debug-info --without doc --target=$(uname -m) kernel.spec 2> build-err.log | tee build-out.log rpmbuild -bb --with firmware --without kabichk --without debug --without debug-info --without doc --target=noarch kernel.spec 2> build-err.log | tee build-out.log ( --without kabichk is needed because the new preemption level somehow breaks the current ABI) The problem is that the package kernel-firmware does not get generated. Any idea of what is missing?
Kernel build for CentOS 7: kernel-firmware not generated I'm in the process of rebuilding the Linux kernel for CentOS 7 to select a different preemption level. My steps follow: sudo yum install rpm-build redhat-rpm-config asciidoc hmaccalc perl-ExtUtils-Embed pesign xmlto audit-libs-devel binutils-devel elfutils-devel elfutils-libelf-devel ncurses-devel newt-devel numactl-devel pciutils-devel python-devel zlib-devel gcc patchutils bison make gcc redhat-rpm-config mkdir -p ~/rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS} echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros wget [URL] rpm -i kernel-3.10.0-327.4.4.el7.src.rpm cd ~/rpmbuild/SPECS rpmbuild -bp --target=$(uname -m) kernel.spec Kernel in BUILD configured and config file copied in SOURCES rpmbuild -bb --with firmware --without kabichk --without debug --without debug-info --without doc --target=$(uname -m) kernel.spec 2> build-err.log | tee build-out.log rpmbuild -bb --with firmware --without kabichk --without debug --without debug-info --without doc --target=noarch kernel.spec 2> build-err.log | tee build-out.log ( --without kabichk is needed because the new preemption level somehow breaks the current ABI) The problem is that the package kernel-firmware does not get generated. Any idea of what is missing?
linux, linux-kernel, centos, redhat, rpmbuild
3
1,400
1
https://stackoverflow.com/questions/34948458/kernel-build-for-centos-7-kernel-firmware-not-generated
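A diagnostic sketch rather than a confirmed answer for the question above: on RHEL/CentOS 7 the firmware generally ships from the separate `linux-firmware` source package, so the 7.x kernel src.rpm may simply not define a kernel-firmware subpackage any more; grepping the spec for its firmware conditional settles it. The fragment below only illustrates the kind of macro pattern to look for, it is not copied from any exact release.

```shell
# Illustrative spec fragment showing the macro pattern to search for.
cat > spec-fragment <<'EOF'
%define with_firmware  %{?_with_firmware: 1} %{?!_with_firmware: 0}
EOF
grep -c 'with_firmware' spec-fragment

# Against the real tree:
#   grep -n -i firmware ~/rpmbuild/SPECS/kernel.spec
#   rpm -q linux-firmware    # the EL7-era firmware package
```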
32,654,892
chrony with intermittent refclock
We have a machine with a GPS connected that we're using as a timeserver, by way of gpsd and chrony. The problem is, the GPS is occasionally unavailable. We'd still like the other machines on our network to continue syncing their time to the GPS-controlled timeserver. But we're not sure how to configure chrony to ensure that this takes place. Even if the GPS is offline for an extended period, we still want all other machines to sync to this one, despite the possibility that its clock has drifted from real GPS time. It looks like the 'local' configuration setting may be helpful, but the documentation is sketchy. Will this setting tell chrony to fall back to the local system clock if a reference clock is not available, but then sync itself with the reference clock when it comes back online? Or is there some other approach altogether that will work? These are all Redhat 6.5 systems, running chrony 1.31.1.
chrony with intermittent refclock We have a machine with a GPS connected that we're using as a timeserver, by way of gpsd and chrony. The problem is, the GPS is occasionally unavailable. We'd still like the other machines on our network to continue syncing their time to the GPS-controlled timeserver. But we're not sure how to configure chrony to ensure that this takes place. Even if the GPS is offline for an extended period, we still want all other machines to sync to this one, despite the possibility that its clock has drifted from real GPS time. It looks like the 'local' configuration setting may be helpful, but the documentation is sketchy. Will this setting tell chrony to fall back to the local system clock if a reference clock is not available, but then sync itself with the reference clock when it comes back online? Or is there some other approach altogether that will work? These are all Redhat 6.5 systems, running chrony 1.31.1.
redhat, ntp, gps-time
3
1,118
1
https://stackoverflow.com/questions/32654892/chrony-with-intermittent-refclock
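The `local` directive is indeed the relevant setting here: it lets chronyd keep serving time from its own clock when no reference source is selectable. A minimal sketch of the relevant configuration lines, written to a temp file for illustration (the refclock line, network range, and stratum value are illustrative assumptions, not taken from the question):

```shell
# Sketch: on a real box these lines would live in /etc/chrony.conf;
# we write to a temp file here so the example is self-contained.
conf=$(mktemp)
cat > "$conf" <<'EOF'
# GPS via gpsd shared memory (assumed refclock setup)
refclock SHM 0 refid GPS

# Serve time to the LAN (assumed address range)
allow 192.168.0.0/16

# If no reference source is selectable, keep serving the local clock
# at a high stratum so clients still sync to this server.
local stratum 10
EOF
grep '^local stratum' "$conf"
```

With `local stratum 10`, chronyd keeps answering clients even while the GPS is offline, and prefers the real refclock again once it returns; the high stratum advertises the degraded accuracy to clients.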
30,321,805
rpmbuild no such file or directory
I'm just learning to make rpm packages for some custom builds of software that gets compiled from source (some legacy stuff needs this, so I'm trying to learn, as some packages can't use the latest versions), but hitting an error (I'm doing this in Vagrant, and also as root, but typically I'm trying not to use root as I'm aware it has potential for damage, it's just this example seems to need some root changes). sudo rpmbuild -ba testspec.spec --define "_topdir /tmp/" So far it looks to be using the directory I expected, /tmp/rpmbuild make[2]: Entering directory `/tmp/rpmbuild/BUILD/exim-4.80.1/build-Linux-x86_64/pdkim' make[2]: `pdkim.a' is up to date. make[2]: Leaving directory `/tmp/rpmbu But then I see these errors... /usr/lib/rpm/brp-compress: line 8: cd: /tmp/BUILDROOT/custom-exim-4.80.1-1.x86_64: No such file or directory + /usr/lib/rpm/brp-strip find: `/tmp/BUILDROOT/custom-exim-4.80.1-1.x86_64': No such file or directory + /usr/lib/rpm/brp-strip-static-archive find: `/tmp/BUILDROOT/custom-exim-4.80.1-1.x86_64': No such file or directory + /usr/lib/rpm/brp-strip-comment-note So it now seems to be looking in /tmp/BUILDROOT I'm new to rpmbuild, and don't quite understand some of the process. My test spec file is as follows...
%define myversion exim-4.80.1 ##%define mybase %{getenv:HOME} %define mybase /tmp %define _topdir %{mybase}/rpmbuild %define _tmppath %{mybase}/rpmbuild/tmp %define name custom-exim %define release 1 %define version 4.80.1 %define buildroot %{_topdir}/%{name}-%{version}-root BuildRoot: %{buildroot} Summary: %{name} Name: %{name} Version: %{version} Release: %{release} Source0: ftp://exim.noris.de/exim/exim4/old/exim-4.80.1.tar.gz License: GPLv1+ Group: Language AutoReq: no AutoProv: no Requires: db4-devel pcre-devel libdb-devel libXt-devel libXaw-devel %description Custom Exim Build %prep #Do the following manually before building rpm #mkdir -p /tmp/rpmbuild/BUILD /tmp/rpmbuild/SPECS /tmp/rpmbuild/SOURCES /tmp/rpmbuild/BUILDROOT /tmp/rpmbuild/RPMS /tmp/rpmbuild/SRPMS #wget ftp://exim.noris.de/exim/exim4/old/exim-4.80.1.tar.gz -O /tmp/rpmbuild/SOURCES/exim-4.80.1.tar.gz %setup -q -n %{myversion} grep exim /etc/passwd || useradd -c "Exim" -d /var/spool/exim -m -s /bin/bash exim %build # exim needs to config changes before compiling, may do these first and repackage cp %{mybase}/rpmbuild/BUILD/%{myversion}/src/EDITME %{mybase}/rpmbuild/BUILD/%{myversion}/Local/Makefile cp %{mybase}/rpmbuild/BUILD/%{myversion}/exim_monitor/EDITME %{mybase}/rpmbuild/BUILD/%{myversion}/Local/eximon.conf sed -i -e 's/EXIM_USER=$/EXIM_USER=exim/g' "%{mybase}/rpmbuild/BUILD/%{myversion}/Local/Makefile" sed -i -e 's/LOOKUP_DNSDB=yes/#LOOKUP_DNSDB=yes/g' "%{mybase}/rpmbuild/BUILD/%{myversion}/Local/Makefile" make %install rm -rf $RPM_BUILD_ROOT #%{__mkdir_p} '%{buildroot}%{_sbindir}' make install %clean rm -rf $RPM_BUILD_ROOT %post %postun %files Why is it using /tmp/BUILDROOT literally, instead of /tmp/rpmbuild, and are there other obvious things I'm doing wrong ? I've looked at a lot of other tutorials on rpmbuild, but aren't very clear on best practices or what happens during each phase.
rpmbuild no such file or directory I'm just learning to make rpm packages for some custom builds of software that gets compiled from source (some legacy stuff needs this, so I'm trying to learn, as some packages can't use the latest versions), but hitting an error (I'm doing this in Vagrant, and also as root, but typically I'm trying not to use root as I'm aware it has potential for damage, it's just this example seems to need some root changes). sudo rpmbuild -ba testspec.spec --define "_topdir /tmp/" So far it looks to be using the directory I expected, /tmp/rpmbuild make[2]: Entering directory `/tmp/rpmbuild/BUILD/exim-4.80.1/build-Linux-x86_64/pdkim' make[2]: `pdkim.a' is up to date. make[2]: Leaving directory `/tmp/rpmbu But then I see these errors... /usr/lib/rpm/brp-compress: line 8: cd: /tmp/BUILDROOT/custom-exim-4.80.1-1.x86_64: No such file or directory + /usr/lib/rpm/brp-strip find: `/tmp/BUILDROOT/custom-exim-4.80.1-1.x86_64': No such file or directory + /usr/lib/rpm/brp-strip-static-archive find: `/tmp/BUILDROOT/custom-exim-4.80.1-1.x86_64': No such file or directory + /usr/lib/rpm/brp-strip-comment-note So it now seems to be looking in /tmp/BUILDROOT I'm new to rpmbuild, and don't quite understand some of the process. My test spec file is as follows...
%define myversion exim-4.80.1 ##%define mybase %{getenv:HOME} %define mybase /tmp %define _topdir %{mybase}/rpmbuild %define _tmppath %{mybase}/rpmbuild/tmp %define name custom-exim %define release 1 %define version 4.80.1 %define buildroot %{_topdir}/%{name}-%{version}-root BuildRoot: %{buildroot} Summary: %{name} Name: %{name} Version: %{version} Release: %{release} Source0: ftp://exim.noris.de/exim/exim4/old/exim-4.80.1.tar.gz License: GPLv1+ Group: Language AutoReq: no AutoProv: no Requires: db4-devel pcre-devel libdb-devel libXt-devel libXaw-devel %description Custom Exim Build %prep #Do the following manually before building rpm #mkdir -p /tmp/rpmbuild/BUILD /tmp/rpmbuild/SPECS /tmp/rpmbuild/SOURCES /tmp/rpmbuild/BUILDROOT /tmp/rpmbuild/RPMS /tmp/rpmbuild/SRPMS #wget ftp://exim.noris.de/exim/exim4/old/exim-4.80.1.tar.gz -O /tmp/rpmbuild/SOURCES/exim-4.80.1.tar.gz %setup -q -n %{myversion} grep exim /etc/passwd || useradd -c "Exim" -d /var/spool/exim -m -s /bin/bash exim %build # exim needs to config changes before compiling, may do these first and repackage cp %{mybase}/rpmbuild/BUILD/%{myversion}/src/EDITME %{mybase}/rpmbuild/BUILD/%{myversion}/Local/Makefile cp %{mybase}/rpmbuild/BUILD/%{myversion}/exim_monitor/EDITME %{mybase}/rpmbuild/BUILD/%{myversion}/Local/eximon.conf sed -i -e 's/EXIM_USER=$/EXIM_USER=exim/g' "%{mybase}/rpmbuild/BUILD/%{myversion}/Local/Makefile" sed -i -e 's/LOOKUP_DNSDB=yes/#LOOKUP_DNSDB=yes/g' "%{mybase}/rpmbuild/BUILD/%{myversion}/Local/Makefile" make %install rm -rf $RPM_BUILD_ROOT #%{__mkdir_p} '%{buildroot}%{_sbindir}' make install %clean rm -rf $RPM_BUILD_ROOT %post %postun %files Why is it using /tmp/BUILDROOT literally, instead of /tmp/rpmbuild, and are there other obvious things I'm doing wrong ? I've looked at a lot of other tutorials on rpmbuild, but aren't very clear on best practices or what happens during each phase.
centos, redhat, rpmbuild, rpm-spec
3
10,664
2
https://stackoverflow.com/questions/30321805/rpmbuild-no-such-file-or-directory
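The literal `/tmp/BUILDROOT` is explained by macro precedence: a `--define` on the rpmbuild command line overrides the spec's `%define _topdir`, and on modern RPM the default buildroot is composed under `%{_topdir}/BUILDROOT/`. A pure-shell sketch of that path composition (the package fields are taken from the error message in the question):

```shell
# --define "_topdir /tmp/" on the command line wins over the spec's
# %define _topdir /tmp/rpmbuild, so the default buildroot is composed
# from /tmp/ rather than /tmp/rpmbuild.
topdir="/tmp/"                                  # command-line value (takes precedence)
name_ver_rel_arch="custom-exim-4.80.1-1.x86_64" # %{name}-%{version}-%{release}.%{arch}
buildroot="${topdir%/}/BUILDROOT/${name_ver_rel_arch}"
echo "$buildroot"    # -> /tmp/BUILDROOT/custom-exim-4.80.1-1.x86_64
```

The fix is either to drop the command-line `--define` (letting the spec's `%define _topdir /tmp/rpmbuild` apply) or to pass the full path on the command line: `--define "_topdir /tmp/rpmbuild"`. Note also that on RPM 4.6+ the `BuildRoot:` tag in the spec is ignored.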
29,522,013
How do you activate a ruleflow-group in drools
Can you please help me understand how to fire a specific group of rules in drools 6 final? I have total of more than 100 rules. I have grouped the rules using ruleflow-group but I don't know how to activate a ruleflow-group. I need to do something like this: if (a == x) fireRuleflowOne if (a == y) fireRuleFlowTwo I am using StatefulKnowledgeSession and there is nothing in the api that I can use to fire/activate a specific rule group. Before/when calling fireAllRules I want to tell it to fireGroupOfRules. StatefulKnowledgeSession session = knowledgeBase.newStatefulKnowledgeSession(); session.insert(facts); session.fireAllRules(); Please let me know if you need more detail. thanks in advance
How do you activate a ruleflow-group in drools Can you please help me understand how to fire a specific group of rules in drools 6 final? I have total of more than 100 rules. I have grouped the rules using ruleflow-group but I don't know how to activate a ruleflow-group. I need to do something like this: if (a == x) fireRuleflowOne if (a == y) fireRuleFlowTwo I am using StatefulKnowledgeSession and there is nothing in the api that I can use to fire/activate a specific rule group. Before/when calling fireAllRules I want to tell it to fireGroupOfRules. StatefulKnowledgeSession session = knowledgeBase.newStatefulKnowledgeSession(); session.insert(facts); session.fireAllRules(); Please let me know if you need more detail. thanks in advance
java, drools, redhat
3
5,756
1
https://stackoverflow.com/questions/29522013/how-do-you-activate-a-ruleflow-group-in-drools
28,865,400
Is there an rpm for Tomcat 7 on RedHat Enterprise 6?
I am looking to install Tomcat 7 on RHEL6 using an RPM package, but it seems difficult to locate an RPM which installs components to their standard RedHat locations. Is there a simple community RPM for this?
Is there an rpm for Tomcat 7 on RedHat Enterprise 6? I am looking to install Tomcat 7 on RHEL6 using an RPM package, but it seems difficult to locate an RPM which installs components to their standard RedHat locations. Is there a simple community RPM for this?
jakarta-ee, tomcat7, redhat, rpm, rhel6
3
1,877
2
https://stackoverflow.com/questions/28865400/is-there-an-rpm-for-tomcat-7-on-redhat-enterprise-6
21,775,765
what is significance of bind to zero address
I was seeing EADDRNOTAVAIL errors in connect() calls. I dug deeper and found that sockets were being bound to the zero IP address. See the following, where both calls were successful:- setsockopt(s, SOL_SOCKET, SO_REUSEADDR, (char *)&y, sizeof(y)); /* y is int with value 1 */ bind(s, (struct sockaddr *)lockaddr, sizeof(rtinetaddr_tp)); where lockaddr={.sin_family=2, .sin_port=0, .sin_addr={.s_addr=0}, .sin_zero=""} This I found on the RH site, and I have the same kernel. My question is: what if I remove the bind() at the client side of the application? Will that be a quick cure, or will it lead to disaster? I have run sample programs without bind at the client, but the app I am talking about establishes hundreds of connections. So what may happen in the worst case?
what is significance of bind to zero address I was seeing EADDRNOTAVAIL errors in connect() calls. I dug deeper and found that sockets were being bound to the zero IP address. See the following, where both calls were successful:- setsockopt(s, SOL_SOCKET, SO_REUSEADDR, (char *)&y, sizeof(y)); /* y is int with value 1 */ bind(s, (struct sockaddr *)lockaddr, sizeof(rtinetaddr_tp)); where lockaddr={.sin_family=2, .sin_port=0, .sin_addr={.s_addr=0}, .sin_zero=""} This I found on the RH site, and I have the same kernel. My question is: what if I remove the bind() at the client side of the application? Will that be a quick cure, or will it lead to disaster? I have run sample programs without bind at the client, but the app I am talking about establishes hundreds of connections. So what may happen in the worst case?
c, linux, sockets, bind, redhat
3
1,791
2
https://stackoverflow.com/questions/21775765/what-is-significance-of-bind-to-zero-address
17,885,556
Linking R packages during installation to a Linux RPM
I have installed the rpm of GMP version 4.3.1 but when I try to download the R package 'gmp' it fails with the following error saying that it cannot find GMP. */checking for __gmpz_ui_sub in -lgmp... no configure: error: GNU MP not found, or not 4.1.4 or up, see [URL] ERROR: configuration failed for package gmp*/ This verifies I have gmp installed */$ rpm -q gmp gmp-4.3.1-7.el6_2.2.x86_64/* Is there a command I can add to install.packages("gmp") that will point to the GMP rpm?
Linking R packages during installation to a Linux RPM I have installed the rpm of GMP version 4.3.1 but when I try to download the R package 'gmp' it fails with the following error saying that it cannot find GMP. */checking for __gmpz_ui_sub in -lgmp... no configure: error: GNU MP not found, or not 4.1.4 or up, see [URL] ERROR: configuration failed for package gmp*/ This verifies I have gmp installed */$ rpm -q gmp gmp-4.3.1-7.el6_2.2.x86_64/* Is there a command I can add to install.packages("gmp") that will point to the GMP rpm?
linux, r, redhat, gmp
3
1,176
1
https://stackoverflow.com/questions/17885556/linking-r-packages-during-installation-to-a-linux-rpm
11,891,752
How to fix?: mysql> show variables like 'plugin_dir'; Does not show plugin_dir location
After this post I continued to try to setup MySQL memcached User-Defined Functions as per these instructions: [URL] But now when trying to find the plugin_dir location I get: mysql> show variables like 'plugin_dir'; +---------------+-------+ | Variable_name | Value | +---------------+-------+ | plugin_dir | | +---------------+-------+ 1 row in set (0.00 sec) It's blank. What did I miss? Thanks
How to fix?: mysql> show variables like 'plugin_dir'; Does not show plugin_dir location After this post I continued to try to setup MySQL memcached User-Defined Functions as per these instructions: [URL] But now when trying to find the plugin_dir location I get: mysql> show variables like 'plugin_dir'; +---------------+-------+ | Variable_name | Value | +---------------+-------+ | plugin_dir | | +---------------+-------+ 1 row in set (0.00 sec) It's blank. What did I miss? Thanks
mysql, memcached, redhat, libmemcache, libmemcached
3
6,471
2
https://stackoverflow.com/questions/11891752/how-to-fix-mysql-show-variables-like-plugin-dir-does-not-show-plugin-dir-l
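A blank `plugin_dir` usually means the server was never given one, so one fix is to set it explicitly in my.cnf and restart mysqld. A sketch, written to a temp file for illustration (the directory below is an assumption; point it wherever the memcached UDF `.so` files were actually installed):

```shell
# Hypothetical my.cnf fragment; on a real box this would go under the
# [mysqld] section of /etc/my.cnf, followed by a server restart.
cnf=$(mktemp)
cat > "$cnf" <<'EOF'
[mysqld]
plugin_dir = /usr/lib64/mysql/plugin
EOF
grep 'plugin_dir' "$cnf"
```

After the restart, `SHOW VARIABLES LIKE 'plugin_dir';` should report the configured path instead of a blank value.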
7,693,993
Default Path for WebLogic site root on RedHat 5
I am flying blind and could use some help. I am a long-time Windows web developer/web admin and I have inherited a WebLogic 11g/RHEL5 box. I'm trying to figure out where the website files might be. My only contact with the box is through FTP, and I'm not sure the account I'm using has all of the permissions I need. Googling led me to check /opt/bea, but there is no /bea folder under /opt. Another possibility was /var/local/WebLogic, but there was no /WebLogic folder. Any help would be greatly appreciated.
Default Path for WebLogic site root on RedHat 5 I am flying blind and could use some help. I am a long-time Windows web developer/web admin and I have inherited a WebLogic 11g/RHEL5 box. I'm trying to figure out where the website files might be. My only contact with the box is through FTP, and I'm not sure the account I'm using has all of the permissions I need. Googling led me to check /opt/bea, but there is no /bea folder under /opt. Another possibility was /var/local/WebLogic, but there was no /WebLogic folder. Any help would be greatly appreciated.
web, redhat, weblogic11g, rhel5
3
1,208
1
https://stackoverflow.com/questions/7693993/default-path-for-weblogic-site-root-on-redhat-5
4,252,041
Setting PATH for 'ROOT' in Red Hat 5
I have edited '/etc/profile' and added the following: export JAVA_HOME=/usr/java/jdk1.6.0_21 However, when logged in as 'root': '# echo $JAVA_HOME' lists a different path. How do I configure 'root' to pick the above path? NB: Exporting paths in 'bashrc' or '.bash_profile', for root, did not work for account 'root'.
Setting PATH for 'ROOT' in Red Hat 5 I have edited '/etc/profile' and added the following: export JAVA_HOME=/usr/java/jdk1.6.0_21 However, when logged in as 'root': '# echo $JAVA_HOME' lists a different path. How do I configure 'root' to pick the above path? NB: Exporting paths in 'bashrc' or '.bash_profile', for root, did not work for account 'root'.
bash, path, redhat
3
15,354
1
https://stackoverflow.com/questions/4252041/setting-path-for-root-in-red-hat-5
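When root sees a different `JAVA_HOME`, some later-sourced file (root's own `~/.bash_profile`, `~/.bashrc`, or another `/etc/profile.d/*.sh` snippet) is usually overriding the `/etc/profile` value. On RHEL the conventional place for such exports is a `/etc/profile.d` snippet; a sketch, written to a temp file so it is self-contained (on the real box the file would be `/etc/profile.d/java.sh`):

```shell
# Sketch: define JAVA_HOME in a profile.d-style snippet, then source it,
# which is what a login shell would do for /etc/profile.d/java.sh.
snippet=$(mktemp)
cat > "$snippet" <<'EOF'
export JAVA_HOME=/usr/java/jdk1.6.0_21
export PATH=$JAVA_HOME/bin:$PATH
EOF
. "$snippet"
echo "$JAVA_HOME"    # -> /usr/java/jdk1.6.0_21
```

Also note that `su` without `-` keeps the previous user's environment; use `su -` (a login shell) so the profile scripts are re-read, and grep root's `~/.bash_profile` and `~/.bashrc` for a stale `JAVA_HOME` assignment.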
4,174,688
what does "possible SYN flooding on port 8009. Sending cookies" mean in /var/log/messages?
I have a web application setup: apache+mod_jk+tomcat (the mod_jk connector on port 8009). Recently my app started to hang a few times a day, and in /var/log/messages there are entries like "possible SYN flooding on port 8009. Sending cookies" every 30-60 seconds. I have to restart each time the app hangs. Is it a DDoS attack? Or can system/application errors cause this problem? Any help would be highly appreciated. Thanks.
what does "possible SYN flooding on port 8009. Sending cookies" mean in /var/log/messages? I have a web application setup: apache+mod_jk+tomcat (the mod_jk connector on port 8009). Recently my app started to hang a few times a day, and in /var/log/messages there are entries like "possible SYN flooding on port 8009. Sending cookies" every 30-60 seconds. I have to restart each time the app hangs. Is it a DDoS attack? Or can system/application errors cause this problem? Any help would be highly appreciated. Thanks.
apache, tomcat, redhat, mod-jk, flooding
3
11,056
2
https://stackoverflow.com/questions/4174688/what-does-possible-syn-flooding-on-port-8009-sending-cookies-mean-in-var-log
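The kernel logs this message when a listen queue overflows and it falls back to SYN cookies; with mod_jk it often means Tomcat's AJP connector stopped accepting connections (a hung app exhausting its thread pool), not necessarily an attack. While the root cause is investigated, the backlog knobs can be raised. A sketch with illustrative values, written to a temp file (on a real box the lines belong in /etc/sysctl.conf, applied with `sysctl -p`):

```shell
# Illustrative kernel settings: keep SYN cookies on, and enlarge the
# SYN and accept backlogs so short bursts don't overflow the queue.
f=$(mktemp)
cat > "$f" <<'EOF'
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.core.somaxconn = 1024
EOF
grep 'net' "$f"
```

On the Tomcat side, consider raising `acceptCount` on the AJP connector in server.xml, and take a thread dump (`kill -3 <pid>`) when the hang occurs to see what the worker threads are blocked on.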
66,355,485
How can I get userinfo from the bearer token of RedHat Openshift?
I have configured the OAuth client on RedHat OpenShift so that I can do SSO for my application using the inbuilt OAuth server of the RedHat OpenShift cluster. I got redirected to OCP login page, authenticated via OCP, and got the access_token as well. But now I want to get userinfo from the token I got. But it seems API /oauth/userinfo is not returning the user information. Getting this error when I try GET /oauth/userinfo REST API /oauth/userinfo Am I missing something?
How can I get userinfo from the bearer token of RedHat Openshift? I have configured the OAuth client on RedHat OpenShift so that I can do SSO for my application using the inbuilt OAuth server of the RedHat OpenShift cluster. I got redirected to OCP login page, authenticated via OCP, and got the access_token as well. But now I want to get userinfo from the token I got. But it seems API /oauth/userinfo is not returning the user information. Getting this error when I try GET /oauth/userinfo REST API /oauth/userinfo Am I missing something?
oauth-2.0, openshift, redhat
3
1,039
1
https://stackoverflow.com/questions/66355485/how-can-i-get-userinfo-from-the-bearer-token-of-redhat-openshift
64,921,412
How to get artifactory to use IPV4 instead of IPV6
I'm trying to install Artifactory on CentOS 8. While the installation proceeds reasonably, the configuration ends up binding to IPV6 rather than IPV4 interfaces. This of course seems to make it inaccessible on the IPV4 network. I've tried putting an IPV4 address in var/etc/system.yaml as described in: jfrog artifactory could not validate router error without effect. I've tried disabling the IPV6 interface, but that doesn't seem to be sufficient. Any hints would be most welcome.
How to get artifactory to use IPV4 instead of IPV6 I'm trying to install Artifactory on CentOS 8. While the installation proceeds reasonably, the configuration ends up binding to IPV6 rather than IPV4 interfaces. This of course seems to make it inaccessible on the IPV4 network. I've tried putting an IPV4 address in var/etc/system.yaml as described in: jfrog artifactory could not validate router error without effect. I've tried disabling the IPV6 interface, but that doesn't seem to be sufficient. Any hints would be most welcome.
redhat, artifactory, ipv4, centos8
3
1,324
1
https://stackoverflow.com/questions/64921412/how-to-get-artifactory-to-use-ipv4-instead-of-ipv6
52,517,184
In /var/log/messages, what is: Watching system buttons [...] Power Button
Currently reading through /var/log/messages and I can see occurrences of: *systemd-logind: Watching power buttons on /dev/input/event0 (Power Button)* *systemd-logind: Watching power buttons on /dev/input/event1 (Sleep Button)* What do these entries mean on a RHEL system?
In /var/log/messages, what is: Watching system buttons [...] Power Button Currently reading through /var/log/messages and I can see occurrences of: *systemd-logind: Watching power buttons on /dev/input/event0 (Power Button)* *systemd-logind: Watching power buttons on /dev/input/event1 (Sleep Button)* What do these entries mean on a RHEL system?
linux, logging, redhat, systemd
3
10,415
1
https://stackoverflow.com/questions/52517184/in-var-log-messages-what-is-watching-system-buttons-power-button
52,148,625
Unexpected failed dependencies when uninstalling a package using RPM
When checking for packages that depend on a particular package (in this case lz4 ) using rpm it does not list any packages that require either lz4-1.7.5-2.el7.i686 or lz4-1.7.5-2.el7.x86_64 ... # rpm -q --whatrequires lz4-1.7.5-2.el7.i686 no package requires lz4-1.7.5-2.el7.i686 # rpm -q --whatrequires lz4-1.7.5-2.el7.x86_64 no package requires lz4-1.7.5-2.el7.x86_64 # But I can't uninstall either of them without using rpm --nodeps as they appear to be needed by systemd and/or systemd-libs . # rpm --erase --allmatches lz4 error: Failed dependencies: liblz4.so.1()(64bit) is needed by (installed) systemd-libs-219-57.el7_5.1.x86_64 liblz4.so.1()(64bit) is needed by (installed) systemd-219-57.el7_5.1.x86_64 liblz4.so.1 is needed by (installed) systemd-libs-219-57.el7_5.1.i686 # It looks like the output of rpm --whatrequires is wrong, but is it? (I doubt that it is actually wrong.) I don't understand why it doesn't include systemd or systemd-libs. I thought of using rpm --erase --test instead of rpm --whatrequires to identify packages that have dependants, but is there another more reliable way to do this? Thanks for your help.
Unexpected failed dependencies when uninstalling a package using RPM When checking for packages that depend on a particular package (in this case lz4 ) using rpm it does not list any packages that require either lz4-1.7.5-2.el7.i686 or lz4-1.7.5-2.el7.x86_64 ... # rpm -q --whatrequires lz4-1.7.5-2.el7.i686 no package requires lz4-1.7.5-2.el7.i686 # rpm -q --whatrequires lz4-1.7.5-2.el7.x86_64 no package requires lz4-1.7.5-2.el7.x86_64 # But I can't uninstall either of them without using rpm --nodeps as they appear to be needed by systemd and/or systemd-libs . # rpm --erase --allmatches lz4 error: Failed dependencies: liblz4.so.1()(64bit) is needed by (installed) systemd-libs-219-57.el7_5.1.x86_64 liblz4.so.1()(64bit) is needed by (installed) systemd-219-57.el7_5.1.x86_64 liblz4.so.1 is needed by (installed) systemd-libs-219-57.el7_5.1.i686 # It looks like the output of rpm --whatrequires is wrong, but is it? (I doubt that it is actually wrong.) I don't understand why it doesn't include systemd or systemd-libs. I thought of using rpm --erase --test instead of rpm --whatrequires to identify packages that have dependants, but is there another more reliable way to do this? Thanks for your help.
redhat, rpm
3
900
2
https://stackoverflow.com/questions/52148625/unexpected-failed-dependencies-when-uninstalling-a-package-using-rpm
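The explanation is that rpm records dependencies against capabilities (here, the library soname), not against package names, so `--whatrequires lz4-...` finds nothing while the erase still fails. Querying the capabilities named in the error message finds the dependants. A sketch that builds the two capability spellings; the actual `rpm` invocations are shown as comments because they need a real RPM database:

```shell
# rpm tracks requirements against capabilities such as "liblz4.so.1()(64bit)",
# not against the package name "lz4".
soname="liblz4.so.1"
printf '%s\n' "${soname}()(64bit)" "${soname}"
# On a real system, query each capability:
#   rpm -q --whatrequires "liblz4.so.1()(64bit)"   # 64-bit dependants
#   rpm -q --whatrequires "liblz4.so.1"            # 32-bit dependants
```

This is why `rpm --erase --test` is the more reliable check for "can I remove this?": it resolves the full capability graph, exactly as a real erase would.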
49,686,375
Can I run oc commands in openshift pod terminals?
Is there any way that I can run the oc commands on pod terminals? What I am trying to do is let the user log in using oc login Then run the command to get the token. oc whoami -t And then use that token to call the REST APIs of openshift. This way works on a local environment, but on openshift there are some permission issues, as I guess openshift doesn't give root permissions to the user. It says permission denied. EDIT So basically I want to be able to get that BEARER token I can send in the HEADERS of the REST APIs to create pods, services, routes etc. And I want that token before any pod is made, because I am going to use that token to create pods. It might sound silly, I know, but that's what I want to know: the way we do it using the command line with oc commands, is it possible on openshift? The other possible way could be to call an API that gives me a token and then use that token in other API calls. @gshipley It does sound like a chicken-and-egg problem to me. But if I were to explain to you what I do on my local machine, all I would want is to replicate that on openshift if it is possible. I run the oc commands from nodejs; the oc.exe file is there in my repository. I run oc login and oc whoami -t. I read the token I get and store it. Then I send that token as BEARER in the API headers. That's what works on my local machine. I just want to replicate this scenario on openshift. Is it possible?
Can I run oc commands in openshift pod terminals? Is there any way that I can run the oc commands on pod terminals? What I am trying to do is let the user log in using oc login Then run the command to get the token. oc whoami -t And then use that token to call the REST APIs of openshift. This way works on a local environment, but on openshift there are some permission issues, as I guess openshift doesn't give root permissions to the user. It says permission denied. EDIT So basically I want to be able to get that BEARER token I can send in the HEADERS of the REST APIs to create pods, services, routes etc. And I want that token before any pod is made, because I am going to use that token to create pods. It might sound silly, I know, but that's what I want to know: the way we do it using the command line with oc commands, is it possible on openshift? The other possible way could be to call an API that gives me a token and then use that token in other API calls. @gshipley It does sound like a chicken-and-egg problem to me. But if I were to explain to you what I do on my local machine, all I would want is to replicate that on openshift if it is possible. I run the oc commands from nodejs; the oc.exe file is there in my repository. I run oc login and oc whoami -t. I read the token I get and store it. Then I send that token as BEARER in the API headers. That's what works on my local machine. I just want to replicate this scenario on openshift. Is it possible?
docker, openshift, redhat, openshift-client-tools, openshift-cartridge
3
8,874
1
https://stackoverflow.com/questions/49686375/can-i-run-oc-commands-in-openshift-pod-terminals
47,205,104
Cannot install openssl-devel on redhat 7.3?
I am trying to install the openssl-devel package (in order to use it in the PACT rust implementation) on "Red Hat Enterprise Linux Server release 7.3", which has "OpenSSL 1.0.1e-fips 11 Feb 2013" installed but not its include files, as far as I can tell. The various options I tried to install it through yum failed (for example, I tried [URL] ). I am working behind a proxy, but it doesn't seem to be the problem (I can install other things, and I already added it into /etc/yum.conf). When I run: yum install openssl-devel I get: This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. ... No package openssl-devel available. I found a workaround by building locally, but it takes a lot of preparation (setting OPENSSL_LIB_DIR, OPENSSL_INCLUDE_DIR, OPENSSL_STATIC & OPENSSL_DIR environment variables), so a proper fix will be very helpful, since I will recreate a replacement workstation soon (so you can assume that I will discard any garbage that I already put in the system). Thanks, Assaf
Cannot install openssl-devel on redhat 7.3? I am trying to install the openssl-devel package (in order to use it in the PACT rust implementation) on "Red Hat Enterprise Linux Server release 7.3", which has "OpenSSL 1.0.1e-fips 11 Feb 2013" installed but not its include files, as far as I can tell. The various options I tried to install it through yum failed (for example, I tried [URL] ). I am working behind a proxy, but it doesn't seem to be the problem (I can install other things, and I already added it into /etc/yum.conf). When I run: yum install openssl-devel I get: This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. ... No package openssl-devel available. I found a workaround by building locally, but it takes a lot of preparation (setting OPENSSL_LIB_DIR, OPENSSL_INCLUDE_DIR, OPENSSL_STATIC & OPENSSL_DIR environment variables), so a proper fix will be very helpful, since I will recreate a replacement workstation soon (so you can assume that I will discard any garbage that I already put in the system). Thanks, Assaf
openssl, redhat
3
8,287
1
https://stackoverflow.com/questions/47205104/cannot-install-openssl-devel-on-redhat-7-3
47,189,815
Red Hat Developer: Subscription Attachment
I installed RHEL7 for developers (NO-Cost subscription) and registered it with my username and password for the customer portal , however it doesn't want to attach the subscription. So in order to do it manually I need to run the following command subscription-manager attach --pool=YourID And in order to find my ID I have to run the command subscription-manager list --available Which returns No available subscription pools to list I logged in in my account on developers.redhat.com and it says: But on the customer panel it says I don't have any active subscriptions What am I doing wrong? Thank you in advance!!
Red Hat Developer: Subscription Attachment I installed RHEL7 for developers (NO-Cost subscription) and registered it with my username and password for the customer portal , however it doesn't want to attach the subscription. So in order to do it manually I need to run the following command subscription-manager attach --pool=YourID And in order to find my ID I have to run the command subscription-manager list --available Which returns No available subscription pools to list I logged in in my account on developers.redhat.com and it says: But on the customer panel it says I don't have any active subscriptions What am I doing wrong? Thank you in advance!!
linux, redhat, subscription, rhel7
3
3,634
4
https://stackoverflow.com/questions/47189815/red-hat-developer-subscription-attachment
45,202,649
Yum cannot find the package I want to install
I am trying to run a simple command, sudo yum install SDL2 . I know that this package exists, as per the SDL website: Red Hat-based systems (including Fedora) can simply do "sudo yum install SDL2" to get the library installed system-wide, or "sudo yum install SDL2-devel" to get headers and other build requirements ready for compiling your own SDL programs. However, when I try to execute my command, I get the following: Setting up Install Process No package SDL2 available. Error: Nothing to do I am using Red Hat Enterprise Linux Server release 5.3 (Tikanga). How can I go about getting yum to locate this package?
Yum cannot find the package I want to install I am trying to run a simple command, sudo yum install SDL2 . I know that this package exists, as per the SDL website: Red Hat-based systems (including Fedora) can simply do "sudo yum install SDL2" to get the library installed system-wide, or "sudo yum install SDL2-devel" to get headers and other build requirements ready for compiling your own SDL programs. However, when I try to execute my command, I get the following: Setting up Install Process No package SDL2 available. Error: Nothing to do I am using Red Hat Enterprise Linux Server release 5.3 (Tikanga). How can I go about getting yum to locate this package?
linux, redhat, yum
3
4,639
1
https://stackoverflow.com/questions/45202649/yum-cannot-find-the-package-i-want-to-install
44,716,247
No manual entry for any command
I get the below response when trying to view the manual for any command: No manual entry for << command >> On $ echo $MANPATH , it says .:/usr/local/man:/usr/man $ echo $PATH gives the following result: /usr/local/bin:/mis/TREE/bin:/usr/bin:/bin:/usr/ucb:/proj/blade/tools/bin and on $ MANPATH= man -w man it says MANPATH=: Command not found. What could be the issue? How do I resolve this? I am on Enterprise Linux 7 (Maipo).
No manual entry for any command I get the below response when trying to view the manual for any command: No manual entry for << command >> On $ echo $MANPATH , it says .:/usr/local/man:/usr/man $ echo $PATH gives the following result: /usr/local/bin:/mis/TREE/bin:/usr/bin:/bin:/usr/ucb:/proj/blade/tools/bin and on $ MANPATH= man -w man it says MANPATH=: Command not found. What could be the issue? How do I resolve this? I am on Enterprise Linux 7 (Maipo).
linux, redhat, manpage
3
9,354
1
https://stackoverflow.com/questions/44716247/no-manual-entry-for-any-command
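The `MANPATH=: Command not found.` error is a useful clue: the `VAR= command` prefix-assignment is Bourne/bash syntax, and csh/tcsh rejects it, which suggests this account's login shell is csh. Separately, the MANPATH shown lacks the standard RHEL 7 directories. A sketch for a POSIX shell (the directories are the usual defaults, assumed for this box):

```shell
# In bash/sh, point MANPATH at the standard locations; a wrong MANPATH
# makes `man` miss every page.
MANPATH=/usr/local/share/man:/usr/share/man
export MANPATH
echo "$MANPATH"
# In csh/tcsh the equivalent is:
#   setenv MANPATH /usr/local/share/man:/usr/share/man
```

Alternatively, unsetting MANPATH entirely lets `man` fall back to its built-in search path from its configuration file, which is often the simplest fix.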
44,512,186
Puppet : Copy files only IF the package needs to be installed to the latest
I'm a Puppet beginner - so bear with me :) I'm trying to write a module that does the following:

- Check if a package is installed with the latest version in the repos.
- If the package needs to be installed, copy config files from the Puppet source location to the client, then install the package.
- Once the files are copied and the package is installed, run the script that uses the config files on the client to apply the necessary settings.
- Once all of this is done, remove the copied files from the client.

I've come up with the following:

    class somepackage(
      $package_files_base = "/var/tmp",
      $package_setup      = "/var/tmp/package-setup.sh",
      $ndc_file           = "/var/tmp/somefile.ndc",
      $osd_file           = "/var/tmp/somefile.osd",
      $nds_file           = "/var/tmp/somefile.nds",
      $configini_file     = "/var/tmp/somefile.ini",
      $required_files     = ["$package_setup", "$ndc_file", "$osd_file", "$nds_file", "$configini_file"])
    {
      package { 'some package':
        ensure => 'latest',
        notify => Exec['Package Setup'],
      }

      file { 'Package Setup Files':
        path    => $package_files_base,
        ensure  => directory,
        replace => false,
        recurse => true,
        source  => "puppet:///modules/somepackage/${::domain}",
        mode    => '0755',
      }

      exec { 'Package Setup':
        command     => "$package_setup",
        logoutput   => true,
        timeout     => 1800,
        require     => [ File['Package Setup Files'] ],
        refreshonly => true,
        notify      => Exec['Remove config files'],
      }

      exec { 'Remove config files':
        path        => ['/usr/bin','/usr/sbin','/bin','/sbin'],
        command     => "rm \"${package_setup}\" \"${ndc_file}\" \"${osd_file}\" \"${nds_file}\" \"${configini_file}\"",
        refreshonly => true,
      }
    }

While this achieves most of what I want, I notice that upon rerunning puppet apply the files, although they were removed, were recopied. I can understand why this happens, but I don't know how to code it so that the files get copied ONLY if the package gets updated/installed (e.g. the package wasn't installed or was old). Otherwise the files will get copied over and over again every time Puppet runs (every 30 min with the default setup) on the client, I assume... I tried using replace => false to prevent this, but that just means the files won't ever get removed from /var/tmp after the first run of the class, because it only prevents subsequent runs of the class from re-copying the files (from my testing). This does prevent the redundant, repetitive copying - however, I just want the files to be gone the first time! Is this possible? Head hurts :( Thanks in advance!

We're running Puppet version 3.8.6 on EL7.3.

EDIT: To be clear, this is the bit that I'm struggling with: the resource file { 'Package Setup Files': . This keeps copying files even though the package isn't updated/installed. How do I prevent this from happening?
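One pattern that might address the question above is to guard the staging resources on a fact, so the files are only managed on runs where the package will actually change. This is a sketch, not a tested solution: `$::needs_somepackage` is an entirely hypothetical custom fact that you would have to write yourself to compare the installed version against the repo.

```puppet
# Hypothetical sketch - $::needs_somepackage would come from a custom fact
# (not shown) that reports whether the package is missing or outdated.
if $::needs_somepackage {
  file { 'Package Setup Files':
    path    => $package_files_base,
    ensure  => directory,
    recurse => true,
    source  => "puppet:///modules/somepackage/${::domain}",
    mode    => '0755',
    before  => Package['some package'],   # stage files before the install
  }
}
```

On runs where the fact is false, the file resource is simply absent from the catalog, so nothing is re-copied and the cleanup exec can still remove the staged files after a successful install.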
puppet, redhat
3
2,665
1
https://stackoverflow.com/questions/44512186/puppet-copy-files-only-if-the-package-needs-to-be-installed-to-the-latest
41,572,052
Fuse file system with "default permission" option
I am new to fuse. I have mounted fuse by the following command. /mnt/fuse -o default_permissions -o allow_other -o nonempty -o hard_remove –d Now If I login as "test" user and tried to create a file called "testfile". test@11540302:/registration> touch testfile touch: setting times of `testfile': Permission denied Strace output: uname({sys="Linux", node="11540302", ...}) = 0 brk(0) = 0x8055000 brk(0x8076000) = 0x8076000 open("testfile", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK|O_LARGEFILE, 0666) = 3 dup2(3, 0) = 0 close(3) = 0 utimensat(0, NULL, NULL, 0) = -1 EACCES (Permission denied) close(0) = 0 But "testfile" creation is successful with owner as root user, -rw-r--r-- 1 root trusted 0 Jan 19 13:51 testfile I can understand that fuse application is running in root level, file creation happened with the owner as root. Because of that test user cannot perform any operation on "testfile". My question: Since I have given "allow_other" while mounting, why test user cannot having privileges to access the "testfile"? Please correct me if my understanding is wrong.
linux, linux-kernel, filesystems, redhat, fuse
3
3,272
2
https://stackoverflow.com/questions/41572052/fuse-file-system-with-default-permission-option
40,561,644
Where can I find the php-pgsql package for RedHat 7?
I am developing my project with PHP 5.6.27 PostgreSQL 9.6.1 RedHat 7 OS I had searched for the php_pgsql package everywhere. also tried with some rpms. but still I am not able to get the package. I developed my whole project in php_pgsql package in windows and i faced this issue when tried to shift from Windows to Linux. please help to solve this. thank you.
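For what it's worth, a common route on RHEL 7 is a third-party repository. The sketch below assumes the Remi repository (the repo names `remi` / `remi-php56` and the release RPM URL are Remi's; verify them against whatever repo supplied your PHP 5.6.27 build before running anything):

```shell
# Sketch: pull php-pgsql for PHP 5.6 from the Remi repo on RHEL 7.
sudo yum install -y epel-release
sudo yum install -y http://rpms.remirepo.net/enterprise/remi-release-7.rpm
sudo yum --enablerepo=remi,remi-php56 install -y php-pgsql
php -m | grep -i pgsql   # verify the extension is now loaded
```

The key point is that the php-pgsql build must match the PHP build it extends, which is why it has to come from the same repo family as the PHP 5.6 packages themselves.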
php, postgresql, redhat, php-pgsql
3
3,078
2
https://stackoverflow.com/questions/40561644/where-can-i-find-php-pgsql-package-for-redhat-7
32,404,733
Bash cd into subdirectories with increment name
I have a list of folders in the current directory with names "S01.result" up to "S15.result", amongst other stuff. I'm trying to write a script that cds into each folder with the name pattern "sXX.result" and does something within each subdirectory. This is what I'm trying:

    ext = ".result"
    echo -n "Enter the number of your first subject."
    read start
    echo -n "Enter the number of your last subject. "
    read end

    for i in {start..end}; do
        if [[i < 10]]; then
            name = "s0$i&ext"
            echo $name
        else
            name = "s$i$ext"
            echo $name
        fi
        # src is the path of the current directory
        if [ -d "$src/$name" ]; then
            cd "$src/$name"
            # do some other things here
        fi
    done

Am I concatenating the filename correctly, and am I finding the subdirectory correctly? Is there any better way to do it?
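Since the snippet above is exactly what is being asked about, here is one corrected sketch (variable names kept from the question; the interactive `read` calls are replaced by fixed values so the sketch runs standalone). The parts that don't work in the original: assignments must not have spaces around `=`, `{start..end}` cannot expand variables, `[[i < 10]]` is not a valid test, and `s0$i&ext` has `&` where `$` was meant. `printf '%02d'` zero-pads, which removes the under-10 branch entirely:

```shell
#!/bin/bash
ext=".result"                    # no spaces around '=' in assignments
start=1                          # in the real script: read start / read end
end=3
src=$PWD                         # path of the current directory

for i in $(seq "$start" "$end"); do          # {start..end} can't expand variables
    name=$(printf 's%02d%s' "$i" "$ext")     # zero-pads: s01.result, s02.result, ...
    echo "$name"
    if [ -d "$src/$name" ]; then
        ( cd "$src/$name" && true )          # subshell: no need to cd back out
    fi
done
```

Running `cd` inside a subshell is a deliberate choice: the working directory reverts automatically when the subshell exits, so each iteration starts from `$src` again.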
bash, file, increment, redhat, cd
3
242
1
https://stackoverflow.com/questions/32404733/bash-cd-into-subdirectories-with-increment-name
32,322,747
Install another php version
Currently I have PHP 5.2.11. We have a new project to deploy which uses the Symfony framework and requires a higher PHP version. We can't upgrade to a newer version, since a lot of running applications use the lower version. So in this case I need to install another PHP version for it to run. I have searched for tutorials, yet I ended up confused. I would really appreciate your help with this. I'm using Red Hat Enterprise Linux AS release 4, by the way. Thanks in advance.
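A sketch of one common approach: compile the newer PHP from source into its own prefix so it never touches the system 5.2.11. The version, prefix, and configure flags below are illustrative placeholders; on a distro as old as RHEL AS 4 the build toolchain itself may also need attention:

```shell
# Build a second PHP under /opt so both versions coexist.
tar xzf php-5.x.y.tar.gz && cd php-5.x.y
./configure --prefix=/opt/php5 --with-config-file-path=/opt/php5/etc
make && sudo make install
/opt/php5/bin/php -v     # invoke the new interpreter by its full path
```

The system PHP keeps serving the existing applications, while the Symfony project is pointed at `/opt/php5/bin/php` explicitly (e.g. via a separate CGI/FastCGI setup for its vhost).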
php, symfony, redhat
3
807
1
https://stackoverflow.com/questions/32322747/install-another-php-version
32,146,230
lapack/blas-related error when trying to add scipy to miniconda-installed copy of python 2.7.10 on RedHat 6
I have two versions of Python on my RedHat 6 machine: 2.7.8, which came with the system originally, and 2.7.10, which I've installed using miniconda for a project. I have to use the newer version to run some demo code for another project. The demo script produced this error:

    ImportError: No module named scipy.sparse

Running pip install scipy failed with a bunch of warnings and then this:

    numpy.distutils.system_info.NotFoundError: no lapack/blas resources found

numpy is already installed; that is, I get "Requirement already satisfied" when trying pip install numpy. The yum install command recommended on the SciPy install page completed fine but did not help, probably because this Python version is installed in a non-default location. I got the same result after building BLAS and LAPACK from source as described here. How do I get scipy to install properly? Thx
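Worth noting: inside a miniconda environment the usual route is conda's own binary packages, which ship scipy already linked against a BLAS/LAPACK, sidestepping the pip source build entirely. A sketch, assuming the miniconda `conda` binary is the one on PATH for the 2.7.10 install:

```shell
conda install -y numpy scipy
python -c "import scipy.sparse; print(scipy.__version__)"
```

This avoids the `no lapack/blas resources found` error because no local compilation happens at all; pip-from-source is only needed when a prebuilt package is unavailable.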
python-2.7, scipy, redhat, lapack, blas
3
1,436
3
https://stackoverflow.com/questions/32146230/lapack-blas-related-error-when-trying-to-add-scipy-to-miniconda-installed-copy-o
30,236,584
Install old version of firefox
I want to install Firefox 24.2.0 in a Scientific Linux environment. I have tried firefox-24.5.0-1.mga4.x86_64.rpm, but it did not work. What is the correct Linux command for that?
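For reference, Mozilla keeps every past release on its archive server, and the 24.2.0 line exists there as an ESR build. A sketch (the URL pattern is real, but verify the exact file name and architecture directory before relying on it; the Mageia RPM above failed because it was built against a different distro's dependencies):

```shell
cd /opt
curl -LO https://ftp.mozilla.org/pub/firefox/releases/24.2.0esr/linux-x86_64/en-US/firefox-24.2.0esr.tar.bz2
tar xjf firefox-24.2.0esr.tar.bz2
/opt/firefox/firefox --version
```

The official tarball runs in place from the extracted directory, so no RPM dependency resolution is involved at all.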
linux, firefox, installation, redhat
3
5,487
2
https://stackoverflow.com/questions/30236584/install-old-version-of-firefox
27,864,667
can a java service bring down the hosting Unix box?
I was checking my Java service on a RedHat Unix box, only to find that the box's SSH didn't work. The SA explained that the machine ran out of memory and started consuming swap space, which eventually led the machine to hang and shut down. He implied that my Java service brought down the server. I have a hard time believing this is possible - I'd think that in the worst-case scenario the service would throw an OutOfMemoryError and only crash itself. My Java memory settings are "-Xms1g -Xmx5g", and /proc/meminfo shows the box has:

    MemTotal: 16304084 kB
    MemFree:  12288796 kB

A second question: can we look into some log under /var/log to find out what the real problem is?
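On the second question: when the kernel's OOM killer fires, it logs to the kernel log, which on Red Hat systems lands in /var/log/messages, so grepping there is a reasonable first check. A runnable sketch - it greps a fabricated sample file, since real OOM entries obviously can't be produced on demand; the message wording mirrors the kernel's, but treat the exact format as illustrative:

```shell
# Build a sample log fragment shaped like OOM-killer output, then show
# the grep you would run against the real /var/log/messages.
log=$(mktemp)
cat > "$log" <<'EOF'
May  1 03:12:01 host kernel: Out of memory: Kill process 4321 (java) score 912 or sacrifice child
May  1 03:12:01 host kernel: Killed process 4321 (java) total-vm:5242880kB
EOF
grep -Ei 'out of memory|killed process' "$log"   # on a real box: /var/log/messages
rm -f "$log"
```

If the grep matches on the real box, the kernel - not the JVM - killed the process: native (off-heap) allocations, thread stacks, and other processes all count against the box's memory, so a machine can be exhausted before the JVM ever throws OutOfMemoryError.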
java, unix, out-of-memory, redhat
3
123
1
https://stackoverflow.com/questions/27864667/can-a-java-service-bring-down-the-hosting-unix-box