Saturday, July 18, 2020

Using File System Service with Exadata Cloud Service

File System Service (FSS) is basically an NFS-style file server service.   The simplest example is to create your FSS in the same region and VCN as the server on which you would like to mount the filesystem.  Below is a screenshot of creating a filesystem with FSS.

Documentation:  Create a FSS, Mounting a FSS



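If you prefer to script this instead of clicking through the console, the same steps can be done with the OCI CLI. The sketch below is only an outline; the availability domain, compartment OCID, subnet OCID, and display names are placeholders, and the OCIDs returned by each command need to be fed into the next one.

# create the filesystem (substitute your AD and compartment OCID)
oci fs file-system create --availability-domain "AD-1" --compartment-id ocid1.compartment.oc1..xxxx --display-name fss2exacs

# create a mount target in the same VCN/subnet that the ExaCS client network uses
oci fs mount-target create --availability-domain "AD-1" --compartment-id ocid1.compartment.oc1..xxxx --subnet-id ocid1.subnet.oc1..xxxx --display-name fss2exacs-mt

# export the filesystem through the mount target's export set
oci fs export create --export-set-id ocid1.exportset.oc1..xxxx --file-system-id ocid1.filesystem.oc1..xxxx --path /fss2exacs
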
Once the above is completed, click on the new file system, then on the export path, and then on the "Mount Commands" button, which provides the commands to use on the target where you would like to mount this filesystem.   Here is a screenshot of the mount commands:



On my ExaCS, nfs-utils was already installed, and I used a different name for the mount point than the one suggested.  Here are the commands I used.

sudo su -
# create the local mount point (I used /scratch3 rather than the suggested default)
mkdir /scratch3
# mount the export using the mount target's private IP and the export path
mount 10.0.0.49:/fss2exacs /scratch3

Then update your /etc/fstab so that the filesystem is mounted each time the server starts.
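
As a sketch, the /etc/fstab entry would look something like this (same mount target IP, export path, and mount point as above; nofail keeps the node booting even if the mount target is unreachable):

10.0.0.49:/fss2exacs  /scratch3  nfs  defaults,nofail  0 0

You can check the entry without rebooting by unmounting the filesystem (umount /scratch3) and then running mount -a.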




You will need to do this on each compute node of your ExaCS from which you would like to access this filesystem.

Tuesday, July 14, 2020

Snapshots with Exadata Cloud

Currently the OCI cloud tooling does not support cloning or snapshots; however, you can do these manually.   Below are a couple of examples of using SQL to create PDB snapshots on the Exadata Cloud Service or Exadata Cloud at Customer.





Example #1 -- Creating PDB Snapshots Using a Full Test Master
Scenario: create a test master that is a full copy of production; developers then create snapshots from the test master.

show pdbs

/* Create Full test master PDB - note this would usually be a remote clone*/
create pluggable database prod1tm1 from prod1 keystore identified by "WELcome__2019";

show pdbs;

/* Need to open Test Master read write before opening it read only */
alter pluggable database prod1tm1 open   instances=all;
alter pluggable database prod1tm1 close immediate instances=all;
alter pluggable database prod1tm1 open read only instances=all;  /* read only test master pdb */

/* Create  First PDB snapshot from Test Master */

create pluggable database tm1snap1 from prod1tm1 tempfile reuse create_file_dest='+SPRC1' snapshot copy keystore identified by "WELcome__2019";

alter pluggable database tm1snap1 open instances=all;
alter session set container=tm1snap1;

create table panda as select * from dba_users;
select * from panda;

/* Create second snapshot from same test master */
create pluggable database tm1snap2 from prod1tm1 tempfile reuse create_file_dest='+SPRC1' snapshot copy keystore identified by "WELcome__2019";
alter pluggable database tm1snap2 open instances=all;

alter session set container=cdb$root;
show pdbs

/* query shows parent of snapshots */
column name format a20
select CON_ID, NAME, OPEN_MODE, SNAPSHOT_PARENT_CON_ID from v$pdbs;

alter pluggable database tm1snap1 close immediate instances=all;
alter pluggable database tm1snap2 close immediate instances=all;
alter pluggable database prod1tm1 close immediate instances=all;
drop pluggable database tm1snap1 including datafiles;
drop pluggable database tm1snap2 including datafiles;
drop pluggable database prod1tm1 including datafiles;



Example #2 -- Creating PDB Snapshots Using Sparse Test Masters
Scenario: create one test master that is a full copy of production, then create sparse test masters that are refreshed each night from production using GoldenGate (GG); developers then create snapshots from the test masters.  This provides a new copy of production each night for developers to create snaps from: a full test master for Monday, and sparse test masters Tuesday through Friday.
Note: the same name is used for the test master each night so that the GG configuration doesn't have to change.

/* create full copy of test master from prod, usually this would be a remote clone */
create pluggable database prod2tm from prod2 keystore identified by "WELcome__2019";
alter pluggable database prod2tm open  instances=all;

/* setup GG , catch up test master with production */

/* stop GG  when it is time to create a new test master*/

alter pluggable database prod2tm close immediate instances=all;
alter pluggable database prod2tm unplug into '/home/oracle/snapshot/prod2tm_monday.xml' encrypt using "zzz";
drop pluggable database prod2tm keep datafiles;

/* create full copy of  test master*/
create pluggable database prod2tm_monday using '/home/oracle/snapshot/prod2tm_monday.xml' nocopy keystore identified by "WELcome__2019" decrypt using "zzz";

/* open full copy of test master read only,    need to open read/write first then read only */
alter pluggable database prod2tm_monday open instances=all;
alter pluggable database prod2tm_monday close immediate instances=all;
alter pluggable database prod2tm_monday  open read only instances=all;

/* create the next day's test master as a sparse clone */
create pluggable database prod2tm from prod2tm_monday tempfile reuse create_file_dest='+SPRC1' snapshot copy keystore identified by "WELcome__2019";
alter pluggable database prod2tm open  instances=all;

/* sync test master to prod*/
/* Restart GG */

/* developer creates sparse pdb to use */
create pluggable database prod2tm_monday_greg from prod2tm_monday tempfile reuse create_file_dest='+SPRC1' snapshot copy keystore identified by "WELcome__2019";
alter pluggable database prod2tm_monday_greg open instances=all;

show pdbs
column name format a20
select CON_ID, NAME, OPEN_MODE, SNAPSHOT_PARENT_CON_ID from v$pdbs;

/* repeat process for each day's  sparse test master */

alter pluggable database prod2tm close immediate instances=all;
alter pluggable database prod2tm_monday close immediate instances=all;
alter pluggable database prod2tm_monday_greg close immediate instances=all;
drop pluggable database prod2tm including datafiles;
drop pluggable database prod2tm_monday including datafiles;
drop pluggable database prod2tm_monday_greg including datafiles;

/* remove the unplug manifest files (OS command, run from the shell) */
rm /home/oracle/snapshot/*.xml


ASM

Note: snapshot clones require ASM file access control to be enabled on the disk group that holds the test master's datafiles. Enable it once as sysasm:

sqlplus / as sysasm
ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'ACCESS_CONTROL.ENABLED' = 'TRUE';