Thursday, June 2, 2022

Scaling OCPUs in Exadata Cloud

There are three main ways to scale the number of OCPUs in Exadata Cloud Service or Cloud at Customer:

  1. OCI Console
  2. REST API
  3. OCI Command Line Interface (CLI)

Here are four examples of how to script this scaling using either the REST API via curl or the OCI CLI:
  1. "Auto Scaling" Script
  2. Curl 
  3. OCI CLI
  4. dbaascli
Below you will find details on each of these examples.

1.  "Auto Scaling" Script

This script can be downloaded from Oracle Support via Scale-up and Scale-down automation utility for OCI DB System (ExaCS/ExaCC) (Doc ID 2719916.1)

The Oracle DynamicScaling utility can be executed as a standalone executable or as a daemon on one or more ExaCS compute nodes or ExaC@C VM cluster nodes. By default, DynamicScaling monitors the CPUs with very limited host impact. If the load stays above the maximum CPU threshold ("--maxthreshold") for an interval of time ("--interval"), it automatically scales up the OCPUs by a factor ("--ocpu") up to a maximum limit ("--maxocpu"). If the load stays below the minimum CPU threshold ("--minthreshold") for an interval of time ("--interval"), it scales down until the minimum limit ("--minocpu") of OCPUs is reached. If a valid cluster filesystem (ACFS) is provided, DynamicScaling will consider the load of all nodes where it is running and will perform the scale-up/down based on the cluster node load (average/max). There are several examples in this doc.
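The scale-up/down decision described above can be sketched in a few lines of bash. This is only an illustration of the logic, not the utility itself; the variable names mirror the utility's flags, but the threshold values are ones I picked for the example:

```shell
#!/bin/bash
# Illustration of the scale decision DynamicScaling makes (not the utility itself).
maxthreshold=75   # --maxthreshold: scale up when CPU load % stays above this
minthreshold=25   # --minthreshold: scale down when CPU load % stays below this
minocpu=4         # --minocpu: never go below this many OCPUs
maxocpu=12        # --maxocpu: never go above this many OCPUs
step=2            # --ocpu: scale by this many OCPUs at a time

decide() {  # args: current_load current_ocpu -> prints the new OCPU target
  local load=$1 ocpu=$2 target=$2
  if [ "$load" -gt "$maxthreshold" ] && [ "$ocpu" -lt "$maxocpu" ]; then
    target=$(( ocpu + step ))
    [ "$target" -gt "$maxocpu" ] && target=$maxocpu
  elif [ "$load" -lt "$minthreshold" ] && [ "$ocpu" -gt "$minocpu" ]; then
    target=$(( ocpu - step ))
    [ "$target" -lt "$minocpu" ] && target=$minocpu
  fi
  echo "$target"
}

decide 90 6   # above max threshold -> 8
decide 10 6   # below min threshold -> 4
decide 50 6   # inside the band -> 6 (unchanged)
```

In the real utility the load figure comes from monitoring the hosts over the "--interval" window and the change is applied through the cloud API; here the function only computes the target OCPU count.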

2.  Curl bash script

When using curl, I find it easier to start with an HTTP GET and make sure that works before moving on to an HTTP PUT.    This is a nice blog article describing where to find each of these values.    This is the API documentation for cloudVmClusters and UpdateCloudVmCluster.

Below is an example HTTP GET script where you will need to change the tenancy_ocid, user_ocid, privateKeyPath, fingerprint, rest_api, and host values.

#!/bin/bash

########################## Fill these in with your values ##########################
#OCID of the tenancy calls are being made in to
tenancy_ocid="ocid1.tenancy.oc1..aaaaaaaactsgmdorji3bujtx6gbchuvbblbr3ddpqxxp5o3f2dxxxxxxxxxx"

# OCID of the user making the rest call
user_ocid="ocid1.user.oc1..aaaaaaaagt7makfurd57pdcwdtqh3l3yxbvymvwiei63ut3sqpxxxxxxxxxx"

# path to the private PEM format key for this user
privateKeyPath="/home/opc/greg.pem"

# fingerprint of the private key for this user
fingerprint="d2:7d:2e:22:3e:a8:8a:df:b7:d4:12:xx:xx:xx:xx:xx"

# The REST api you want to call, with any required parameters.
rest_api="/20160918/cloudVmClusters/ocid1.cloudvmcluster.oc1.phx.abyhqljrchvzy3scg4i5c4vykzrjszncnramcftjwymxkiaznixxxxxxxxxx"

# The host you want to make the call against
host="database.us-phoenix-1.oraclecloud.com"
####################################################################################


date=`date -u "+%a, %d %h %Y %H:%M:%S GMT"`
date_header="date: $date"
host_header="host: $host"
request_target="(request-target): get $rest_api"
# note the order of items. The order in the signing_string matches the order in the headers
signing_string="$request_target\n$date_header\n$host_header"
headers="(request-target) date host"


#echo "====================================================================================================="
printf '%b' "signing string is $signing_string \n"
signature=`printf '%b' "$signing_string" | openssl dgst -sha256 -sign $privateKeyPath | openssl enc -e -base64 | tr -d '\n'`
printf '%b' "Signed Request is  \n$signature\n"

echo "====================================================================================================="
set -x
curl -v -X GET -sS https://$host$rest_api -H "date: $date" -H "Authorization: Signature version=\"1\",keyId=\"$tenancy_ocid/$user_ocid/$fingerprint\",algorithm=\"rsa-sha256\",headers=\"$headers\",signature=\"$signature\""

Here is a curl script example for an HTTP PUT.   This script changes the number of OCPUs (cores) at the VM cluster level.

#!/bin/bash

########################## Fill these in with your values ##########################
#OCID of the tenancy calls are being made in to
tenancy_ocid="ocid1.tenancy.oc1..aaaaaaaactsgmdorji3bujtx6gbchuvbblbr3ddpqxxp5o3f2dxxxxxxxxxx"

# OCID of the user making the rest call
user_ocid="ocid1.user.oc1..aaaaaaaagt7makfurd57pdcwdtqh3l3yxbvymvwiei63ut3sqpxxxxxxxxxx"

# path to the private PEM format key for this user
privateKeyPath="/home/opc/greg.pem"

# fingerprint of the private key for this user
fingerprint="d2:7d:2e:22:3e:a8:8a:df:b7:d4:12:xx:xx:xx:xx:xx"

# The REST api you want to call, with any required parameters.
rest_api="/20160918/cloudVmClusters/ocid1.cloudvmcluster.oc1.phx.abyhqljrchvzy3scg4i5c4vykzrjszncnramcftjwymxkiaznixxxxxxxxxx"

# The host you want to make the call against
host="database.us-phoenix-1.oraclecloud.com"

# the json file containing the data you want to PUT to the rest endpoint
body="./request.json"
####################################################################################


# extra headers required for a POST/PUT request
body_arg=(--data-binary @${body})
content_sha256="$(openssl dgst -binary -sha256 < $body | openssl enc -e -base64)";
content_sha256_header="x-content-sha256: $content_sha256"
content_length="$(wc -c < $body | xargs)";
content_length_header="content-length: $content_length"
headers="(request-target) date host"
# add on the extra fields required for a POST/PUT
headers=$headers" x-content-sha256 content-type content-length"
content_type_header="content-type: application/json";

date=`date -u "+%a, %d %h %Y %H:%M:%S GMT"`
date_header="date: $date"
host_header="host: $host"
request_target="(request-target): put $rest_api"

# note the order of items. The order in the signing_string matches the order in the headers, including the extra POST fields
signing_string="$request_target\n$date_header\n$host_header"
# add on the extra fields required for a POST/PUT
signing_string="$signing_string\n$content_sha256_header\n$content_type_header\n$content_length_header"

echo "====================================================================================================="
printf '%b' "signing string is $signing_string \n"
signature=`printf '%b' "$signing_string" | openssl dgst -sha256 -sign $privateKeyPath | openssl enc -e -base64 | tr -d '\n'`
printf '%b' "Signed Request is  \n$signature\n"

echo "====================================================================================================="
set -x
curl -v --data-binary "@./request.json" -X PUT -sS https://$host$rest_api -H "date: $date" -H "x-content-sha256: $content_sha256" -H "content-type: application/json" -H "content-length: $content_length" -H "Authorization: Signature version=\"1\",keyId=\"$tenancy_ocid/$user_ocid/$fingerprint\",algorithm=\"rsa-sha256\",headers=\"$headers\",signature=\"$signature\""

This is the file request.json:

{
    "cpuCoreCount" : 6
}
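Before running the PUT, you can sanity-check the x-content-sha256 value the script computes: it is simply the SHA-256 of the raw body bytes, base64-encoded. A minimal check (using a one-line equivalent of the request.json body above):

```shell
#!/bin/bash
# Compute the x-content-sha256 header value exactly as the PUT script does:
# SHA-256 of the raw body bytes, base64-encoded.
printf '%s' '{ "cpuCoreCount" : 6 }' > request.json
content_sha256=$(openssl dgst -binary -sha256 < request.json | openssl enc -e -base64)
echo "x-content-sha256: $content_sha256"
```

If this value does not match what the script sends, the signature check on the server side will fail, so it is a quick way to rule out body-encoding problems.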

3.  OCI Command Line Interface (OCI CLI)

This is a good blog article on setting up the OCI CLI.
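In brief (a rough sketch; the dynamic-group and compartment names below are placeholders I chose for illustration): install the OCI CLI, then either run `oci setup config` to create user-based credentials, or, for the instance-principal auth used in the script below, create a dynamic group covering the compute nodes and grant it a policy such as:

```
Allow dynamic-group exacs-scaling-dg to manage database-family in compartment my-compartment
```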

This is the example script:

#!/bin/bash
export OCI_CLI_AUTH=instance_principal
oci db cloud-vm-cluster update --cloud-vm-cluster-id ocid1.cloudvmcluster.oc1.phx.abyhqljrchvzy3scg4i5c4vykzrjszncnramcftjwymxkiaznis5xxxxxxxx --cpu-core-count 6  --auth instance_principal

You will need to modify the VM cluster OCID.   This script changes the VM cluster to 6 OCPUs (cores).
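Before scaling, it can be useful to read back the current core count. A minimal sketch, assuming the OCI CLI is configured (the OCID in the comment is a placeholder, and the sample response is abbreviated to the single field of interest):

```shell
#!/bin/bash
# Real call (requires OCI credentials; the OCID is a placeholder):
#   response=$(oci db cloud-vm-cluster get \
#     --cloud-vm-cluster-id ocid1.cloudvmcluster.oc1.phx.xxxxxxxx)
# Abbreviated sample of the JSON the CLI returns:
response='{ "data": { "cpu-core-count": 6 } }'

# Pull the current core count out of the response
cores=$(printf '%s' "$response" | python3 -c 'import json, sys; print(json.load(sys.stdin)["data"]["cpu-core-count"])')
echo "current OCPU count: $cores"
```

Comparing this value before and after the update command is a simple way to confirm the scaling request took effect.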

4.   dbaascli

On ExaC@C only, you can change the OCPUs (cores) with dbaascli.   Log in to one of the compute nodes within the VM cluster you would like to modify and issue the command below.   Here is the documentation.

dbaascli cpuScale update --coreCount 3

This will change each node within the cluster to 3 OCPUs (cores).


Tuesday, May 31, 2022

Migrating SQL Server to Oracle Cloud database offline

This blog covers the basic steps in migrating a SQL Server 2016 database to Autonomous Database 19c via the SQL Developer migration tool in offline mode.


What I used:
  • SQL Server 2016 running on a Windows server in OCI as the source in online mode
  • Autonomous Database on Shared infrastructure as the target in offline mode
  • SQL Developer 21.4.3 running on the same server as SQL Server
  • SQL Developer documentation

Basic steps all within SQL Developer:
  • create connection to the source (SQL Server 2016) 
  • Set up the migration repository; I used a different schema within the target Autonomous Database
  • capture source information online
  • review convert/translate options for meta info, procedures, triggers, views, ...
  • select offline mode for DDL and data
  • run the job
  • after job is completed run the DDL, extract data from source and load the data to the target
For a production database you will almost always use offline mode, in which SQL Developer creates the scripts you use to move the data.  Online mode is nice for testing.

After creating connections within SQL Developer to the source and target as well as setting up the migration repository select Tools->Migration->Migrate.   


Step 1 -- Overview of migration steps

Step 2 -- Select migration repository
Step 3 -- Create project. Note that this is where the scripts and data will be stored, so make sure you have enough storage; also consider where to place the files, perhaps on an NFS mount that both source and target can access.

Step 4 -- Select source in online mode

Step 5 -- Select database(s)

Step 6 -- Review data type conversion options

Step 7 -- Select objects to migrate (default is all)

Step 8 -- Select offline to move DDL

Step 9 -- Select data to be moved offline

Step 10 -- Summary and start the job to migrate

After the job has completed successfully, you will want to run the scripts to create the DDL and copy the data to the target database.

In the project directory you will find the DDL and data scripts.

Run the master.sql script on the target, which creates the DDL.


These are the scripts to extract and load the data.

The first command's parameters are the IP address of the source database and the userid and password to connect to the source. This command exports the data from the source database to files.

The second command's parameters are the connect string, user, and password for the target database. This command loads the data from the files into the target database.


Migrating SQL Server to Oracle Cloud database online

This blog covers the basic steps in migrating a SQL Server 2016 database to Autonomous Database 19c via the SQL Developer online migration tool.


What I used:
  • SQL Server 2016 running on a Windows server in OCI as the source
  • Autonomous Database on Shared infrastructure as the target
  • SQL Developer 21.4.3 running on the same server as SQL Server
  • SQL Developer documentation

Basic steps all within SQL Developer:
  • create connections to the source (SQL Server 2016) and target (Autonomous Database)
  • Set up the migration repository; I used a different schema within the target Autonomous Database
  • capture source information
  • review convert/translate options for meta info, procedures, triggers, views, ...
  • create target
  • submit the job to create the DDL and move the data
For a production database you will almost always use offline mode, in which SQL Developer creates the scripts you use to move the data.  Online mode is nice for testing.

After creating connections within SQL Developer to the source and target as well as setting up the migration repository select Tools->Migration->Migrate.   

Step 1 -- Info on steps



Step 2 -- Select repository

Step 3 -- Create project



Step 4 -- Select source




Step 5 -- Select database(s) 

Step 6 -- Convert data types

Step 7 -- Select objects to be converted

Step 8 -- Select target 

Step 9 -- Select move data online

Step 10 -- Start job to migrate


After the job is completed, use SQL Developer to view the migrated objects and data.  The image below shows the source and the new target.



Tuesday, May 24, 2022

Connecting Oracle Analytics Cloud to an Oracle cloud database with a private endpoint

This blog is one example of how to connect OAC to an Oracle Cloud database with a private endpoint using OAC's private access channel. The basic steps are:

  1. Use Oracle Cloud Infrastructure Console to configure a private access channel within your OAC instance.
  2. From your OAC instance create a data connection to the private database.
Refer to the documentation for all of the details on setting this up, in particular the prerequisites.

I used the following OCI Services:
  • Oracle Analytics Cloud (OAC) with the Enterprise Analytics feature.
  • Exadata Cloud Service (ExaCS) with a 19c pluggable database.

Step 1 -- Creating private access channel

Using the OCI Console access your OAC instance, navigate to the Resources section, click Private Access Channel, and then click Configure Private Access Channel.


The private access channel above and the ExaCS were in the same private subnet, which makes it easier.  A private access channel requires four IP addresses: two for network traffic egress, one for the private access channel itself, and one reserved for future use.

After about 30 minutes, you can click on the created private access channel and note the IP addresses it is using within the subnet.

Step 2 -- Creating the data connection


OAC can't access private data sources on an Oracle Database that uses a Single Client Access Name (SCAN). If you want to connect Oracle Analytics Cloud to an Oracle Database that uses a SCAN, use one of the following methods to set up the connection in Oracle Analytics Cloud: connect directly to the Oracle Database nodes instead of SCAN, or configure an Oracle Connection Manager in front of SCAN.

From your OAC Home Page, click Create, then click Connection, and then select Oracle Database.

When using ExaCS, the default connection strings all use SCAN.  Below are three examples of connecting to a PDB on ExaCS using the compute node names instead.


Basic connection to a single compute node

Advanced with long connection string
(DESCRIPTION=(ENABLE=BROKEN)
(ADDRESS_LIST= (LOAD_BALANCE=on)(FAILOVER=ON)
(ADDRESS=(PROTOCOL=tcp)(HOST=exacsphx1-8abcd1.sub1111112222.exacsvcnphx1.oraclevcn.com)(PORT=1521))
(ADDRESS=(PROTOCOL=tcp)(HOST=exacsphx1-8abcd2.sub111111222.exacsvcnphx1.oraclevcn.com)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=GFSwing_SCALE100.paas.oracle.com)))

Advanced with easy connection string
exacsphx1-8abcd2.sub111111222.exacsvcnphx1.oraclevcn.com:1521/GFSwing_SCALE100.paas.oracle.com
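The easy connect form is just host:port/service_name, assembled from the same pieces as the long descriptor above. As a sanity check (values copied from the example):

```shell
#!/bin/bash
# Assemble the EZConnect string from the same host/port/service used in the
# long-form DESCRIPTION above.
host="exacsphx1-8abcd2.sub111111222.exacsvcnphx1.oraclevcn.com"
port=1521
service="GFSwing_SCALE100.paas.oracle.com"
ezconnect="${host}:${port}/${service}"
echo "$ezconnect"
```

Note that the easy connect form points at a single compute node, so it does not give you the load balancing and failover that the long descriptor's ADDRESS_LIST provides.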


Thanks.