Tuesday, August 25, 2020

Create non-CDB in an existing home on Exadata Cloud Service

Here is an example JSON config that creates a new non-CDB database using an existing database home.

 

/var/opt/oracle/dbaasapi/dbaasapi -i createdb2.json

 

 

[root@exacsphx-z0pvn-pehrf1 dbinput]# cat createdb2.json
{
    "object": "db",
    "action": "start",
    "operation": "createdb",
    "params": {
        "nodelist": "",
        "cdb": "no",
        "ohome_name": "OraHome101_12102_dbbp200114_0",
        "bp": "JAN2020",
        "dbname": "gregfz2",
        "edition": "EE_EP",
        "version": "12.1.0.2",
        "adminPassword": "YourSysPasswordHere",
        "charset": "AL32UTF8",
        "ncharset": "AL16UTF16",
        "backupDestination": "NONE"
    },
    "outputfile": "/home/oracle/gregf/dbinput/createdb2.out",
    "FLAGS": ""
}
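Before handing the file to dbaasapi, a quick syntax check can catch quoting mistakes (this is a generic JSON check, not part of the cloud tooling; it assumes python3 is available on the node):

```shell
# Validate createdb2.json syntax before passing it to dbaasapi
python3 -m json.tool createdb2.json > /dev/null && echo "JSON OK"
```

Progress of the create itself can then be watched in the outputfile path named in the JSON.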

 

Monday, August 24, 2020

Create non-cdb on Exadata Cloud Service

 Please review MOS Note:  Creating non-CDB databases using Oracle Database 12c on the Exadata Cloud Service (Doc ID 2528257.1)

 

My environment:

  • ExaCS quarter rack in Phoenix. 
 

Step 1:  You may need to update your cloud tooling

Step 2:  You may need to download the non-cdb images

  • sudo su -
  • dbaascli cswlib list
  • dbaascli cswlib download --version 19000 --bp JAN2020 --cdb no

Step 3:  Create your non-CDB database

  • /var/opt/oracle/dbaasapi/dbaasapi -i createdb.json

 

[root@exacsphx-z0pvn-pehrf1 dbinput]# cat createdb.json
{
    "object": "db",
    "action": "start",
    "operation": "createdb",
    "params": {
        "nodelist": "",
        "cdb": "no",
        "bp": "JAN2020",
        "dbname": "gregfz1",
        "edition": "EE_EP",
        "version": "19.0.0.0",
        "adminPassword": "YourSysPasswordHere",
        "charset": "AL32UTF8",
        "ncharset": "AL16UTF16",
        "backupDestination": "NONE"
    },
    "outputfile": "/home/oracle/gregf/dbinput/createdb.out",
    "FLAGS": ""
}

Wednesday, August 12, 2020

JSON Search Index on Autonomous Database

 Just a quick test to validate JSON Search Index on ADB.   My environment:

  • ADB Shared 19c

 Below are the commands to create a simple table, insert a row, create the search index and validate the execution plan.

 

create table departments_json (
  department_id   integer not null primary key,
  department_data blob not null
);

alter table departments_json
  add constraint dept_data_json
  check ( department_data is json );
 
insert into departments_json
  values ( 110, utl_raw.cast_to_raw ( '{
  "department": "Accounting",
  "employees": [
    {
      "name": "Higgins, Shelley",
      "job": "Accounting Manager",
      "hireDate": "2002-06-07T00:00:00"
    },
    {
      "name": "Gietz, William",
      "job": "Public Accountant",
      "hireDate": "2002-06-07T00:00:00"
    }
  ]
}' ));

create search index dept_json_i on departments_json ( department_data ) for json;
 
select * from departments_json d where json_textcontains ( department_data, '$', 'Public' );

explain plan for select * from departments_json d where json_textcontains ( department_data, '$', 'Public' );

select * from table(dbms_xplan.display);

 

Looking at the plan we can see the search index was used.

 

Example came from this helpful blog article


Tuesday, August 11, 2020

Mounting OCI Object Storage Bucket as a File System on Exadata Cloud Service

My environment:

  • ExaCS running Oracle Linux 7.7

Step 1:  Install s3fs-fuse rpm

yum install https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/s/s3fs-fuse-1.86-2.el7.x86_64.rpm --nogpgcheck

Step 2A:  Create a credentials file in the format ACCESS_KEY:SECRET_KEY

[root@vm1 bucket1]# cat /root/.passwd-s3fs
a4dzaa8049cbb6e1300b0a9170ff35ddebd50bfc:fYtbmNmp++lomv4z11YX4PuqR9gPbVEe4ZXB00TKUBA=

Step 2B: Change file permissions to 600 

chmod 600 /root/.passwd-s3fs
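s3fs will reject a credentials file that is readable by group or other, so it is worth confirming the mode after the chmod:

```shell
# Should print 600; s3fs refuses a password file with wider permissions
stat -c '%a' /root/.passwd-s3fs
```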

Step 3: Create directory for mount

mkdir /bucket1

Step 4:   Run the s3fs command

s3fs gregf1 /bucket1 -o endpoint=us-phoenix-1  -o passwd_file=/root/.passwd-s3fs -o url="https://yourtenancyhere.compat.objectstorage.us-phoenix-1.oraclecloud.com" -o nomultipart -o use_path_request_style

Example:

[root@exacsx7phx-023jw2 ~]# s3fs gregf1 /bucket1 -o endpoint=us-phoenix-1  -o passwd_file=/root/.passwd-s3fs -o url="https://yourtenancynamehere.compat.objectstorage.us-phoenix-1.oraclecloud.com" -o nomultipart -o use_path_request_style
[root@exacsx7phx-023jw2 ~]# df
Filesystem                      1K-blocks       Used    Available Use% Mounted on
devtmpfs                        371297736          0    371297736   0% /dev
tmpfs                           742617088    1262064    741355024   1% /dev/shm
tmpfs                           371308876       3044    371305832   1% /run
tmpfs                           371308876          0    371308876   0% /sys/fs/cgroup
/dev/mapper/VGExaDb-LVDbSys1     24639868   11840312     11524884  51% /
/dev/xvda1                         499656      52724       420720  12% /boot
/dev/xvdi                      1135204408  112486384    965029960  11% /u02
/dev/mapper/VGExaDb-LVDbOra1    154687468    6198624    140608140   5% /u01
/dev/xvdb                        51475068   13514280     35322964  28% /u01/app/19.0.0.0/grid
tmpfs                            74261776          0     74261776   0% /run/user/0
/dev/asm/acfsvol01-131          838860800  139691116    699169684  17% /acfs01
/dev/asm/upload_vol-131        8388608000 4577187480   3811420520  55% /scratch
tmpfs                            74261776          0     74261776   0% /run/user/2000
s3fs                         274877906944          0 274877906944   0% /bucket1
tmpfs                            74261776          0     74261776   0% /run/user/1001
[root@exacsx7phx-023jw2 ~]# ls /bucket1
fish121.dmp  fish.dmp  FISH_EXPDAT.DMP

 

Step 5:  Make mount permanent
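The Step 4 mount does not survive a reboot. One way to persist it (a sketch based on general s3fs-fuse usage rather than OCI documentation; adjust the bucket name, endpoint, and tenancy URL to yours) is an /etc/fstab entry carrying the same options:

```
s3fs#gregf1 /bucket1 fuse _netdev,passwd_file=/root/.passwd-s3fs,url=https://yourtenancyhere.compat.objectstorage.us-phoenix-1.oraclecloud.com,endpoint=us-phoenix-1,nomultipart,use_path_request_style 0 0
```

After adding the entry, a plain mount /bucket1 should bring the bucket up without rerunning the full s3fs command.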

 


Monday, August 10, 2020

Mounting OCI Object Storage Bucket as a File System on Oracle Linux VM

My environment:

  • VM on OCI running Oracle Linux 7.8

Step 1:  Install s3fs-fuse rpm

yum install https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/s/s3fs-fuse-1.86-2.el7.x86_64.rpm --nogpgcheck

Step 2A:  Create a credentials file in the format ACCESS_KEY:SECRET_KEY

[root@vm1 bucket1]# cat /root/.passwd-s3fs
a4dzaa8049cbb6e1300b0a9170ff35ddebd50bfc:fYtbmNmp++lomv4z11YX4PuqR9gPbVEe4ZXB00TKUBA=

Step 2B: Change file permissions to 600 

chmod 600 /root/.passwd-s3fs

Step 3: Create directory for mount

mkdir /bucket1

Step 4:   Run the s3fs command

s3fs gregf1 /bucket1 -o endpoint=us-phoenix-1  -o passwd_file=/root/.passwd-s3fs -o url="https://yourtenancyhere.compat.objectstorage.us-phoenix-1.oraclecloud.com" -o nomultipart -o use_path_request_style


Step 5:  Make mount permanent

 


Tuesday, July 14, 2020

Snapshots with Exadata Cloud

Currently the OCI cloud tooling does not support cloning or snapshots; however, you can do these manually. Below are a couple of examples of using SQL to create PDB snapshots on the Exadata Cloud Service or Exadata Cloud at Customer.





Example #1 -- Creating PDB Snapshots using a Full Test Master
The scenario: create a test master as a full copy of production; developers then create snapshots from the test master.

show pdbs

/* Create Full test master PDB - note this would usually be a remote clone*/
create pluggable database prod1tm1 from prod1 keystore identified by "WELcome__2019";

show pdbs;

/* Need to open Test Master read write before opening it read only */
alter pluggable database prod1tm1 open   instances=all;
alter pluggable database prod1tm1 close immediate instances=all;
alter pluggable database prod1tm1 open read only instances=all;  /* read only test master pdb */

/* Create  First PDB snapshot from Test Master */

create pluggable database tm1snap1 from prod1tm1 tempfile reuse create_file_dest='+SPRC1' snapshot copy keystore identified by "WELcome__2019";

alter pluggable database tm1snap1 open instances=all;
alter session set container=tm1snap1;

create table panda as select * from dba_users;
select * from panda;

/* Create second snapshot from same test master */
create pluggable database tm1snap2 from prod1tm1 tempfile reuse create_file_dest='+SPRC1' snapshot copy keystore identified by "WELcome__2019";
alter pluggable database tm1snap2 open instances=all;

alter session set container=cdb$root;
show pdbs

/* query shows parent of snapshots */
column name format a20
select CON_ID, NAME, OPEN_MODE, SNAPSHOT_PARENT_CON_ID from v$pdbs;

alter pluggable database tm1snap1 close immediate instances=all;
alter pluggable database tm1snap2 close immediate instances=all;
alter pluggable database prod1tm1 close immediate instances=all;
drop pluggable database tm1snap1 including datafiles;
drop pluggable database tm1snap2 including datafiles;
drop pluggable database prod1tm1 including datafiles;



Example #2 -- Creating PDB Snapshots using Sparse  Test Masters
The scenario: create one test master as a full copy of production, then create sparse test masters that are refreshed each night from production using GoldenGate (GG); developers then create snapshots from the test masters. This provides a new copy of production each night to snap from: a full test master on Monday, and sparse test masters Tuesday through Friday.
Note: The same name is used for the test master each night so that the GG configuration doesn't have to change.

/* create full copy of test master from prod, usually this would be a remote clone */
create pluggable database prod2tm from prod2 keystore identified by "WELcome__2019";
alter pluggable database prod2tm open  instances=all;

/* setup GG , catch up test master with production */

/* stop GG  when it is time to create a new test master*/

alter pluggable database prod2tm close immediate instances=all;
alter pluggable database prod2tm unplug into '/home/oracle/snapshot/prod2tm_monday.xml' encrypt using "zzz";
drop pluggable database prod2tm keep datafiles;

/* create full copy of  test master*/
create pluggable database prod2tm_monday using '/home/oracle/snapshot/prod2tm_monday.xml' nocopy keystore identified by "WELcome__2019" decrypt using "zzz";

/* open full copy of test master read only,    need to open read/write first then read only */
alter pluggable database prod2tm_monday open instances=all;
alter pluggable database prod2tm_monday close immediate instances=all;
alter pluggable database prod2tm_monday  open read only instances=all;

/* create next days test master as a sparse */
create pluggable database prod2tm from prod2tm_monday tempfile reuse create_file_dest='+SPRC1' snapshot copy keystore identified by "WELcome__2019";
alter pluggable database prod2tm open  instances=all;

/* sync test master to prod*/
/* Restart GG */

/* developer creates sparse pdb to use */
create pluggable database prod2tm_monday_greg from prod2tm_monday tempfile reuse create_file_dest='+SPRC1' snapshot copy keystore identified by "WELcome__2019";
alter pluggable database prod2tm_monday_greg open instances=all;

show pdbs
column name format a20
select CON_ID, NAME, OPEN_MODE, SNAPSHOT_PARENT_CON_ID from v$pdbs;

/* repeat process for each day's  sparse test master */

alter pluggable database prod2tm close immediate instances=all;
alter pluggable database prod2tm_monday close immediate instances=all;
alter pluggable database prod2tm_monday_greg close immediate instances=all;
drop pluggable database prod2tm including datafiles;
drop pluggable database prod2tm_monday including datafiles;
drop pluggable database prod2tm_monday_greg including datafiles;

rm /home/oracle/snapshot/*.xml


ASM

Related setup, run as sysasm, to enable ASM access control on the disk group:

sqlplus / as sysasm
ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'ACCESS_CONTROL.ENABLED' = 'TRUE';

Saturday, July 11, 2020

Querying External Data with Autonomous Database

Below is an example of querying an external text file with a comma delimiter on OCI object storage using an Autonomous Database. This example is from the documentation, which had a couple of typos that are corrected here.

I am using an ADB on Dedicated Exadata Infrastructure.

channels.txt file:
1,Direct Sales,Direct
2,Tele Sales,Direct
3,Catalog,Indirect
4,Internet,Indirect
5,Partners,Others
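To reproduce the setup end to end, the file can be recreated locally and pushed to the bucket (the upload line assumes a configured OCI CLI and the same bucket name as the example):

```shell
# Recreate the comma-delimited sample file
cat > channels.txt <<'EOF'
1,Direct Sales,Direct
2,Tele Sales,Direct
3,Catalog,Indirect
4,Internet,Indirect
5,Partners,Others
EOF
wc -l < channels.txt   # 5 rows, matching count(*) from the external table later
# Upload (requires a configured OCI CLI):
#   oci os object put --bucket-name sampledata --file channels.txt
```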


BEGIN
  DBMS_CREDENTIAL.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username => 'your OCI user name here',
    password => 'your token here'
  );
END;
/

select owner, credential_name, username, enabled from dba_credentials;

BEGIN
   DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
    table_name =>'CHANNELS_EXT',
    credential_name =>'DEF_CRED_NAME',
    file_uri_list =>'https://objectstorage.us-ashburn-1.oraclecloud.com/n/yourtenancynamehere/b/sampledata/o/channels.txt',
    format => json_object('delimiter' value ','),
    column_list => 'CHANNEL_ID NUMBER,
    CHANNEL_DESC VARCHAR2(20),
    CHANNEL_CLASS VARCHAR2(20)' );
END;
/

SELECT count(*) FROM channels_ext;
select * from channels_ext;
desc channels_ext;

Thursday, July 9, 2020

Creating an ACFS filesystem on Exadata Cloud Service

Below is an example of how to create an ACFS filesystem on an Exadata Cloud Service quarter rack.  This example creates a 2TB ACFS filesystem in the RECOC1 disk group, mounted at /scratch.



[opc@exacsphx-xyzn1 ~]$ sudo -s
[root@exacsphx-xyzn1 opc]# su - grid
Last login: Thu Oct 17 09:38:15 EDT 2019
[grid@exacsphx-xyzn1 ~]$ asmcmd
ASMCMD> volcreate -G RECOC1 -s 2000G upload_vol
ASMCMD> volinfo -G RECOC1 upload_vol
Diskgroup Name: RECOC1

         Volume Name: UPLOAD_VOL
         Volume Device: /dev/asm/upload_vol-58
         State: ENABLED
         Size (MB): 2048000
         Resize Unit (MB): 64
         Redundancy: HIGH
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:

ASMCMD> exit
[grid@exacsphx-xyzn1 ~]$ logout

[root@exacsphx-xyzn1 opc]# /sbin/mkfs -t acfs /dev/asm/upload_vol-58
mkfs.acfs: version                   = 19.0.0.0.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/upload_vol-58
mkfs.acfs: volume size               = 2147483648000  (   1.95 TB )
mkfs.acfs: Format complete.
[root@exacsphx-xyzn1 opc]# /u01/app/19.0.0.0/grid/bin/srvctl add filesystem -d /dev/asm/upload_vol-58 -g RECOC1 -v upload_vol -m /scratch
[root@exacsphx-xyzn1 opc]# /u01/app/19.0.0.0/grid/bin/srvctl start filesystem -d /dev/asm/upload_vol-58

[root@exacsphx-xyzn1 opc]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
devtmpfs                      355G     0  355G   0% /dev
tmpfs                         709G  1.5G  707G   1% /dev/shm
tmpfs                         355G  3.0M  355G   1% /run
tmpfs                         355G     0  355G   0% /sys/fs/cgroup
/dev/mapper/VGExaDb-LVDbSys1   24G   11G   12G  47% /
/dev/xvda1                    488M   82M  381M  18% /boot
/dev/xvdi                     1.1T   69G  959G   7% /u02
/dev/mapper/VGExaDb-LVDbOra1   20G  7.4G   12G  40% /u01
/dev/xvdb                      50G  9.3G   38G  20% /u01/app/19.0.0.0/grid
/dev/asm/acfsvol01-58         800G   39G  762G   5% /acfs01
tmpfs                          71G     0   71G   0% /run/user/2000
tmpfs                          71G     0   71G   0% /run/user/0
/dev/asm/upload_vol-58        2.0T  4.6G  2.0T   1% /scratch
[root@exacsphx-xyzn1 opc]#
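As a sanity check on the numbers above, the 2000G requested in volcreate is exactly what volinfo and mkfs report in MB and bytes:

```shell
# 2000G expressed in MB and bytes, matching volinfo's "Size (MB): 2048000"
# and mkfs.acfs's "volume size = 2147483648000"
echo $((2000 * 1024))                 # MB
echo $((2000 * 1024 * 1024 * 1024))   # bytes
```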

Tuesday, July 7, 2020

Autonomous Database with HammerDB

Summary of the basic steps to configure HammerDB with Oracle's Autonomous Database, either shared or dedicated.
  1. Download and install Oracle Database Client on client machine which is hosting HammerDB
  2. Optionally (recommended practice), patch the client to the latest version; today that is 19.7
  3. Set the environment variable ORACLE_HOME
  4. Download the wallet for the database you will be using with HammerDB
  5. Unzip the wallet, modify sqlnet.ora, and set TNS_ADMIN to your wallet directory
  6. Modify the oracle.xml file in your hammerdb/config directory
  7. Verify the above settings
My Environment:
  • Windows jump server
  • Linux Compute VM to host hammerdb
  • ADB on Dedicated Quarter Rack
  • ADB on Shared Infrastructure
  • all on same VCN
Steps:

#1   Download and install the latest Oracle Database Client (currently 19.3) onto your Linux compute VM.



#2 Skipping this step

#3   Set the environment variable ORACLE_HOME to your client home.
       i.e.   export ORACLE_HOME=/home/opc/app/opc/product/19.0.0/client_1

#4   Download the wallet for the database you will be using onto your Linux compute VM

  










#5  Unzip the wallet onto your Linux compute VM and set TNS_ADMIN to this directory
        export TNS_ADMIN=/home/opc/wallet_phxhammer1

 

Also modify the sqlnet.ora to point to this directory.
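For reference, the usual edit is to replace the placeholder directory in the wallet's sqlnet.ora with the unzipped wallet path (the directory below is this post's location; yours will differ):

```
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/home/opc/wallet_phxhammer1")))
SSL_SERVER_DN_MATCH=yes
```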


#6  Either modify oracle.xml in the config directory within the hammerdb directory, or use the GUI.   Make the following changes in oracle.xml:
  • system_user = admin
  • system_password = your_admin_password
  • instance = use one of the defined services   i.e.  phxhammer1_tp
  • tpcc_user = tpcc
  • tpcc_pass = insert password for the tpcc user
  • tpcc_def_tab = DATA
  • tpcc_def_temp= TEMP 
Here is a portion of the config file:





#7  Start HammerDB and verify the connection