Tuesday, August 25, 2020

Create non-cdb into existing home on Exadata Cloud Service

Here is an example json config which creates a new non-cdb database using an existing database home.

 

/var/opt/oracle/dbaasapi/dbaasapi -i createdb2.json

 

 

[root@exacsphx-z0pvn-pehrf1 dbinput]# cat createdb2.json

{

    "object": "db",

    "action": "start",

    "operation": "createdb",

    "params": {

        "nodelist": "",

        "cdb": "no",

        "ohome_name": "OraHome101_12102_dbbp200114_0",

        "bp": "JAN2020",

        "dbname": "gregfz2",

        "edition": "EE_EP",

        "version": "12.1.0.2",

        "adminPassword": "YourSysPasswordHere",

        "charset": "AL32UTF8",

        "ncharset": "AL16UTF16",

        "backupDestination": "NONE"

    },

    "outputfile": "/home/oracle/gregf/dbinput/createdb2.out",

    "FLAGS": ""

}
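Once the job finishes, a quick sanity check is to confirm the new database registered on the node. This is a minimal sketch; the srvctl call assumes you run it as oracle with the environment for that 12.1.0.2 home set.

grep -i gregfz2 /etc/oratab          # entry added for the new database/instance
srvctl config database -d gregfz2    # confirm the cluster resource and which Oracle home it uses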

 

Monday, August 24, 2020

Create non-cdb on Exadata Cloud Service

Please review MOS Note: Creating non-CDB databases using Oracle Database 12c on the Exadata Cloud Service (Doc ID 2528257.1)

 

My environment:

  • ExaCS quarter rack in Phoenix. 
 

Step 1:  You may need to update your cloud tooling

Step 2:  You may need to download the non-cdb images

  • sudo su -
  • dbaascli cswlib list
  • dbaascli cswlib download --version 19000 --bp JAN2020 --cdb no

Step 3:  Create your non-cdb database

  • /var/opt/oracle/dbaasapi/dbaasapi -i createdb.json

 

[root@exacsphx-z0pvn-pehrf1 dbinput]# cat createdb.json

{

    "object": "db",

    "action": "start",

    "operation": "createdb",

    "params": {

        "nodelist": "",

        "cdb": "no",

        "bp": "JAN2020",

        "dbname": "gregfz1",

        "edition": "EE_EP",

        "version": "19.0.0.0",

        "adminPassword": "YourSysPasswordHere",

        "charset": "AL32UTF8",

        "ncharset": "AL16UTF16",

        "backupDestination": "NONE"

    },

    "outputfile": "/home/oracle/gregf/dbinput/createdb.out",

    "FLAGS": ""

}
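dbaasapi kicks the create off as a background job, so a simple way to keep an eye on it is to watch the output file named in the config. A minimal sketch using the outputfile path from the json above; depending on the tooling version you may also want to check the dbaas tooling logs under /var/opt/oracle/log.

tail -f /home/oracle/gregf/dbinput/createdb.out    # status that dbaasapi writes back for the createdb job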

Wednesday, August 12, 2020

JSON Search Index on Autonomous Database

Just a quick test to validate JSON Search Index on ADB. My environment:

  • ADB Shared 19c

 Below are the commands to create a simple table, insert a row, create the search index and validate the execution plan.

 

create table departments_json (
  department_id   integer not null primary key,
  department_data blob not null
);

alter table departments_json
  add constraint dept_data_json
  check ( department_data is json );
 
insert into departments_json
  values ( 110, utl_raw.cast_to_raw ( '{
  "department": "Accounting",
  "employees": [
    {
      "name": "Higgins, Shelley",
      "job": "Accounting Manager",
      "hireDate": "2002-06-07T00:00:00"
    },
    {
      "name": "Gietz, William",
      "job": "Public Accountant",
      "hireDate": "2002-06-07T00:00:00"
    }
  ]
}' ));

create search index dept_json_i on departments_json ( department_data ) for json;
 
select * from   departments_json d where  json_textcontains ( department_data, '$', 'Public' );

explain plan for select * from   departments_json d where  json_textcontains ( department_data, '$', 'Public' );

SELECT * FROM table(dbms_xplan.display); 

 

Looking at the plan, we can see the search index was used.
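As an extra sanity check, you can also confirm from the data dictionary that the search index was created as a domain index. A minimal sketch:

-- confirm dept_json_i exists and is a domain (search) index
select index_name, index_type from user_indexes where table_name = 'DEPARTMENTS_JSON';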

 

This example came from this helpful blog article.


Tuesday, August 11, 2020

Mounting OCI Object Storage Bucket as a File System on Exadata Cloud Service

My environment:

  • ExaCS running Oracle Linux 7.7

Step 1:  Install s3fs-fuse rpm

yum install https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/s/s3fs-fuse-1.86-2.el7.x86_64.rpm --nogpgcheck

Step 2A:  Create a file containing the credentials in the format access_key:secret_key

[root@vm1 bucket1]# cat /root/.passwd-s3fs
a4dzaa8049cbb6e1300b0a9170ff35ddebd50bfc:fYtbmNmp++lomv4z11YX4PuqR9gPbVEe4ZXB00TKUBA=

Step 2B: Change file permissions to 600 

chmod 600 /root/.passwd-s3fs

Step 3: Create directory for mount

mkdir /bucket1

Step 4:   Run the s3fs command

s3fs gregf1 /bucket1 -o endpoint=us-phoenix-1  -o passwd_file=/root/.passwd-s3fs -o url="https://yourtenancyhere.compat.objectstorage.us-phoenix-1.oraclecloud.com" -o nomultipart -o use_path_request_style

Example:

[root@exacsx7phx-023jw2 ~]# s3fs gregf1 /bucket1 -o endpoint=us-phoenix-1  -o passwd_file=/root/.passwd-s3fs -o url="https://yourtenancynamehere.compat.objectstorage.us-phoenix-1.oraclecloud.com" -o nomultipart -o use_path_request_style
[root@exacsx7phx-023jw2 ~]# df
Filesystem                      1K-blocks       Used    Available Use% Mounted on
devtmpfs                        371297736          0    371297736   0% /dev
tmpfs                           742617088    1262064    741355024   1% /dev/shm
tmpfs                           371308876       3044    371305832   1% /run
tmpfs                           371308876          0    371308876   0% /sys/fs/cgroup
/dev/mapper/VGExaDb-LVDbSys1     24639868   11840312     11524884  51% /
/dev/xvda1                         499656      52724       420720  12% /boot
/dev/xvdi                      1135204408  112486384    965029960  11% /u02
/dev/mapper/VGExaDb-LVDbOra1    154687468    6198624    140608140   5% /u01
/dev/xvdb                        51475068   13514280     35322964  28% /u01/app/19.0.0.0/grid
tmpfs                            74261776          0     74261776   0% /run/user/0
/dev/asm/acfsvol01-131          838860800  139691116    699169684  17% /acfs01
/dev/asm/upload_vol-131        8388608000 4577187480   3811420520  55% /scratch
tmpfs                            74261776          0     74261776   0% /run/user/2000
s3fs                         274877906944          0 274877906944   0% /bucket1
tmpfs                            74261776          0     74261776   0% /run/user/1001
[root@exacsx7phx-023jw2 ~]# ls /bucket1
fish121.dmp  fish.dmp  FISH_EXPDAT.DMP

 

Step 5:  Make mount permanent
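One way to make the mount permanent is an /etc/fstab entry using the fuse.s3fs type. The line below is a sketch that mirrors the options from Step 4; treat the exact option list as an assumption and check it against the s3fs-fuse documentation for the version you installed.

# /etc/fstab entry (single line) mirroring the Step 4 options
gregf1 /bucket1 fuse.s3fs _netdev,passwd_file=/root/.passwd-s3fs,url=https://yourtenancyhere.compat.objectstorage.us-phoenix-1.oraclecloud.com,endpoint=us-phoenix-1,nomultipart,use_path_request_style 0 0

After adding the entry, you can test it with umount /bucket1 followed by mount /bucket1.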

 


Monday, August 10, 2020

Mounting OCI Object Storage Bucket as a File System on Oracle Linux VM

My environment:

  • VM on OCI running Oracle Linux 7.8

Step 1:  Install s3fs-fuse rpm

yum install https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/s/s3fs-fuse-1.86-2.el7.x86_64.rpm --nogpgcheck

Step 2A:  Create a file containing the credentials in the format access_key:secret_key

[root@vm1 bucket1]# cat /root/.passwd-s3fs
a4dzaa8049cbb6e1300b0a9170ff35ddebd50bfc:fYtbmNmp++lomv4z11YX4PuqR9gPbVEe4ZXB00TKUBA=

Step 2B: Change file permissions to 600 

chmod 600 /root/.passwd-s3fs

Step 3: Create directory for mount

mkdir /bucket1

Step 4:   Run the s3fs command

s3fs gregf1 /bucket1 -o endpoint=us-phoenix-1  -o passwd_file=/root/.passwd-s3fs -o url="https://yourtenancyhere.compat.objectstorage.us-phoenix-1.oraclecloud.com" -o nomultipart -o use_path_request_style
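If the command returns without errors, a quick way to verify the mount is to check df and write a small test file into the bucket. A minimal sketch (the test file name is just an example):

df -h /bucket1                                    # the bucket should show up as an s3fs filesystem
echo "s3fs mount test" > /bucket1/s3fs_test.txt   # small test object (fine with nomultipart)
ls -l /bucket1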


Step 5:  Make mount permanent

 



Saturday, July 18, 2020

Using File System Service with Exadata Cloud Service

File System Service (FSS) is basically an NFS-style file server service. The simplest example is to create your FSS in the same region and VCN as the server where you would like to mount the filesystem. Below is a screenshot of creating a filesystem with FSS.

Documentation:  Create a FSS, Mounting a FSS



Once the above is completed, click on the new file system, then click on the export path, and then on the "Mount Commands" button, which provides the commands to use on the target where you would like to mount this filesystem. Here is a screenshot of the mount commands:



On my ExaCS, nfs-utils was already installed, and I used different names for the mount points. Here are the commands I used.

sudo su -
mkdir /scratch3
mount 10.0.0.49:/fss2exacs /scratch3

Then update your /etc/fstab so that the filesystem is mounted each time the server starts.
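Here is a sketch of the /etc/fstab entry for the mount above; the nfs options shown are only an example, so use whatever the "Mount Commands" dialog lists for your export.

# /etc/fstab entry for the FSS export (options are an example)
10.0.0.49:/fss2exacs /scratch3 nfs defaults,_netdev,nofail 0 0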




You will need to do this on each compute node of your ExaCS from which you would like to access this filesystem.
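If you prefer not to repeat the steps by hand, a simple loop over the remaining nodes works too. This is just a sketch: the node names are placeholders for your environment, and it assumes root ssh equivalence between the nodes.

# repeat the mount setup on the other compute nodes (node names are placeholders)
for node in node2 node3 node4; do
  ssh root@${node} "mkdir -p /scratch3 && mount 10.0.0.49:/fss2exacs /scratch3"
done
# remember to update /etc/fstab on each of those nodes as well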