Thursday, November 9, 2017

ceph mgr RuntimeError: no certificate configured



2017-11-10 12:46:08.605552 7fcc51f1d700  0 mgr[restful] Traceback (most recent call last):
  File "/usr/lib64/ceph/mgr/restful/module.py", line 248, in serve
    self._serve()
  File "/usr/lib64/ceph/mgr/restful/module.py", line 299, in _serve
    raise RuntimeError('no certificate configured')
RuntimeError: no certificate configured


Workaround: disable the restful module

ceph mgr module disable restful

or create a self-signed certificate:
ceph restful create-self-signed-cert
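
After creating the certificate, the restful module usually has to be restarted before it picks the new cert up; a minimal sketch, assuming the Luminous mgr module commands:

ceph mgr module disable restful
ceph mgr module enable restful
ceph mgr module ls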

Wednesday, August 16, 2017

Script to disable JDE password expiration

-- disable JDE password expiration
alter profile DEFAULT limit password_life_time UNLIMITED;
alter profile DEFAULT limit password_grace_time UNLIMITED;
select resource_name,limit from dba_profiles where profile='DEFAULT';
---
spool /tmp/userUnLock.log
select username,account_status,to_char(EXPIRY_DATE, 'yyyy-mm-dd') Expire from dba_users;
alter user PS910DTA identified by PS910DTA;
alter user PS910CTL identified by PS910CTL;
alter user PS910 identified by PS910;
alter user TESTDTA identified by TESTDTA;
alter user TESTCTL identified by TESTCTL;
alter user DV910 identified by DV910;
alter user CRPDTA identified by CRPDTA;
alter user CRPCTL identified by CRPCTL;
alter user PY910 identified by PY910;
alter user PRODDTA identified by PRODDTA;
alter user PRODCTL identified by PRODCTL;
alter user PD910 identified by PD910;
alter user DD910 identified by DD910;
alter user OL910 identified by OL910;
alter user SVM910 identified by SVM910;
alter user SY910 identified by SY910;
alter user PRODUSER identified by PRODUSER;
alter user DEVUSER identified by DEVUSER;
alter user APPLEAD identified by APPLEAD;
alter user JDEDBA identified by JDEDBA;
alter user JDE identified by JDE;
alter user OVR_BIPLATFORM identified by ovsadminE1;
alter user OVR_MDS identified by ovsadminE1;
select username,account_status,to_char(EXPIRY_DATE, 'yyyy-mm-dd') Expire from dba_users;
spool off;
exit;
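
The whole block can be run as a single sqlplus script; a minimal example (the file name /tmp/jde_unlock.sql and the sysdba connection are assumptions, adjust to your environment):

# save the statements above as /tmp/jde_unlock.sql, then run as the oracle OS user:
sqlplus -S "/ as sysdba" @/tmp/jde_unlock.sql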

Monday, August 14, 2017

Troubleshoot OpenNebula



ISSUE: error: failed to connect to the hypervisor
error: error from service: CheckAuthorization: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

On the host, check:
virsh -c qemu:///system list

If it fails, fix /etc/libvirt/libvirtd.conf:

auth_unix_ro = "none"
auth_unix_rw = "none"
unix_sock_group = "oneadmin"
unix_sock_ro_perms = "0770"
unix_sock_rw_perms = "0770"

# systemctl restart libvirtd
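
Then verify as the oneadmin user that libvirt now answers (assumes the oneadmin account from the socket group above exists on the host):

# sudo -u oneadmin virsh -c qemu:///system list --all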

ISSUE: Command execution fail: 'if [ -x"/var/tmp/one/im/run_probes" ]

On the OpenNebula front-end (GUI) host, as the oneadmin user:

$ onehost sync --force
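
The hosts should return to the MONITORED state after the next monitoring cycle; a quick way to check:

$ onehost list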

Wednesday, August 9, 2017

Troubleshoot mgr in ceph Luminous

Note: replace ceph1 with your server's hostname.

Check the mgr auth entry in ceph:

# ceph auth get mgr.ceph1
exported keyring for mgr.ceph1

[mgr.ceph1]
        key = AQDQn4pZkWQjKRAAmEQ45FZgqDeCL5i6ZhfS0g==
        caps mds = "allow *"
        caps mon = "allow profile mgr"
        caps osd = "allow *"

If needed, create the auth entry with:
ceph auth get-or-create mgr.ceph1 mds 'allow *' mon 'allow profile mgr' osd 'allow *'

Check that the keyring file exists and its content is correct in /var/lib/ceph/mgr/ceph-ceph1

[root@ceph1 ceph-ceph1]# ls -l
total 4
-rw------- 1 ceph ceph 67 Aug  9 12:38 keyring

[root@ceph1 ceph-ceph1]# cat keyring
[mgr.ceph1]
        key = AQDQn4pZkWQjKRAAmEQ45FZgqDeCL5i6ZhfS0g==
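
If the keyring file is missing, one way to recreate it (paths follow the ceph1 example above, adjust the hostname):

ceph auth get mgr.ceph1 -o /var/lib/ceph/mgr/ceph-ceph1/keyring
chown ceph:ceph /var/lib/ceph/mgr/ceph-ceph1/keyring
chmod 600 /var/lib/ceph/mgr/ceph-ceph1/keyring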


Check the mgr unit in systemd:

# systemctl | grep mgr

systemctl reset-failed ceph-mgr@ceph1.service
systemctl enable ceph-mgr@ceph1.service
systemctl start ceph-mgr@ceph1.service
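
Then confirm the daemon stays up and an active mgr shows in the cluster status:

systemctl status ceph-mgr@ceph1.service
ceph -s | grep mgr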

Monday, July 24, 2017

Configure wwwi and mird on ERA virtual appliance

1. Download and install the latest ESET File Security (for example http://download.eset.com/download/unix/esets.x86_64.rpm.bin)

rpm -ihv esets-4.5.6.x86_64.rpm

2. Configure wwwi and mird modules
/opt/eset/esets/sbin/esets_set --section wwwi --set agent_enabled=yes
/opt/eset/esets/sbin/esets_set --section wwwi --set listen_addr="0.0.0.0"
/opt/eset/esets/sbin/esets_set --section wwwi --set username="yourusername"
/opt/eset/esets/sbin/esets_set --section wwwi --set password="yourpassword"
/opt/eset/esets/sbin/esets_set --section mird --set agent_enabled=yes
/opt/eset/esets/sbin/esets_set --section mird --set listen_addr="0.0.0.0"
/opt/eset/esets/sbin/esets_set --section global --set av_mirror_enabled=yes

3. Import your licence
/opt/eset/esets/sbin/esets_lic --import=/root/ERA-Endpoint.lic

4. Restart the service
/etc/init.d/esets restart


The config file location is /etc/opt/eset/esets/esets.cfg

The wwwi default port is 8081 and the protocol is HTTPS
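
A couple of quick checks that wwwi actually came up on that port (the [wwwi] section header is assumed to match the --section name used above; ss and curl are generic, not ESET-specific):

grep -A 6 '\[wwwi\]' /etc/opt/eset/esets/esets.cfg
ss -tlnp | grep 8081
curl -k https://localhost:8081/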

http://support.eset.com/kb3674/
 

Monday, April 24, 2017

Upgrade ceph Kraken to Luminous in place

1. Remove the old ceph-release package from all nodes, otherwise ceph-deploy install fails with a conflict like:

[ceph1][DEBUG ] Preparing...                          ########################################
[ceph1][WARNIN]         file /etc/yum.repos.d/ceph.repo from install of ceph-release-1-1.el7.noarch conflicts with file from package ceph-release-1-1.el7.noarch

yum remove ceph-release-1-1.el7.noarch

2. Install the software on the first node, ceph1

ceph-deploy install --release luminous  ceph1

[ceph1][DEBUG ] Complete!
[ceph1][INFO  ] Running command: ceph --version
[ceph1][DEBUG ] ceph version 12.0.2 (5a1b6b3269da99a18984c138c23935e5eb96f73e)

3. Restart services on ceph1

systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target
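
Before moving on, it is worth confirming the mon on ceph1 is really running the new version (admin socket query on the node itself):

ceph daemon mon.ceph1 version
ceph -s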

4. Install the software on the other nodes, ceph2 and ceph3

ceph-deploy install --release luminous  ceph2 ceph3

and restart services

systemctl restart ceph-mon.target
systemctl restart ceph-mgr.target

5. Check ceph status

     health HEALTH_OK
     monmap e3: 3 mons at {ceph1=10.1.16.1:6789/0,ceph2=10.1.16.2:6789/0,ceph3=10.1.16.3:6789/0}
            election epoch 96, quorum 0,1,2 ceph1,ceph2,ceph3
      fsmap e27: 1/1/1 up {0=ceph1=up:active}
        mgr active: ceph1 standbys: ceph3, ceph2
     osdmap e4649: 12 osds: 12 up, 12 in
      pgmap v226087: 1608 pgs, 4 pools, 2765 GB data, 691k objects
            5578 GB used, 5151 GB / 10729 GB avail
                1608 active+clean
  client io 10384 B/s wr, 0 op/s rd, 2 op/s wr

On each node in turn, restart the OSDs and wait until the corresponding OSDs are back up before moving to the next node (a per-daemon alternative is sketched below):

systemctl restart ceph-osd.target
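
If you prefer restarting one OSD daemon at a time instead of the whole target, a sketch (OSD ids as in the tree below):

systemctl restart ceph-osd@0.service
ceph -s        # wait for all PGs to return to active+clean before restarting the next OSD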


[root@ceph1 ceph]# ceph osd tree
ID WEIGHT   TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 10.47839 root default
-2  3.49280     host ceph1
 0  0.87320         osd.0       up  1.00000          1.00000
 3  0.87320         osd.3       up  1.00000          1.00000
 6  0.87320         osd.6       up  1.00000          1.00000
 9  0.87320         osd.9       up  1.00000          1.00000
-3  3.49280     host ceph2
 1  0.87320         osd.1       up  1.00000          1.00000
 4  0.87320         osd.4       up  1.00000          1.00000
 7  0.87320         osd.7       up  1.00000          1.00000
10  0.87320         osd.10      up  1.00000          1.00000
-4  3.49280     host ceph3
12  0.87320         osd.12      up  1.00000          1.00000
13  0.87320         osd.13      up  1.00000          1.00000
14  0.87320         osd.14      up  1.00000          1.00000
15  0.87320         osd.15      up  1.00000          1.00000

and wait for HEALTH_OK.

6. Set the osdmap flag
I got the following warning:

health HEALTH_WARN
all OSDs are running luminous or later but the 'require_luminous_osds' osdmap flag is not set

Execute:
ceph osd set require_luminous_osds
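
Then check that the flag now shows up in the osdmap flags line:

ceph osd dump | grep flags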


7. Restart other services

systemctl restart ceph-mds.target
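
The MDS should rejoin shortly; a quick check:

ceph mds stat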

8. Finally, check the status
[root@ceph1 ceph]# ceph -s
    cluster 2d856a9e-ff9e-4322-bedb-56f3be26b607
     health HEALTH_OK
     monmap e3: 3 mons at {ceph1=10.1.16.1:6789/0,ceph2=10.1.16.2:6789/0,ceph3=10.1.16.3:6789/0}
            election epoch 96, quorum 0,1,2 ceph1,ceph2,ceph3
      fsmap e33: 1/1/1 up {0=ceph1=up:active}
        mgr active: ceph1 standbys: ceph3, ceph2
     osdmap e4674: 12 osds: 12 up, 12 in
      pgmap v227520: 1608 pgs, 4 pools, 2765 GB data, 691k objects
            5573 GB used, 5156 GB / 10729 GB avail
                1608 active+clean
  client io 17394 B/s wr, 0 op/s rd, 2 op/s wr