Feed aggregator

IAM In OCI – User, Groups, Compartment, Policy, Tags, Federation & MFA

Online Apps DBA - 6 hours 33 min ago

IAM is one of the 5 pillars on which OCI is built. IAM is used for Authentication & Authorization purposes for the various resources used in the OCI environment. It is important that you learn about IAM. To know more about IAM and Compartments in detail, check out this video from Oracle ACE & Cloud […]

The post IAM In OCI – User, Groups, Compartment, Policy, Tags, Federation & MFA appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

What is a serverless database?

Yann Neuhaus - Fri, 2020-06-05 15:44
By Franck Pachot

.
After reading the https://cloudwars.co/oracle/oracle-deal-8x8-larry-ellison-picks-amazons-pocket-again/ paper, I am writing some thoughts about how a database can be serverless and elastic. Of course, a database needs a server to process its data. Serverless doesn’t mean that there are no servers.

Serverless as not waiting for server provisioning

The first idea of “serverless” is about provisioning. In the past, when a developer required a new database to start a new project, she had to wait for a server to be installed. In 1996 my first development on Oracle Database started like this: we asked Sun for a server and OS and asked Oracle for the database software, all for free for a few months, in order to start our prototype. Today this would be a Cloud Free Tier access. At that time we had to wait to receive, unbox, and install all this. I learned a lot there about installing an OS, configuring the network, setting up disk mirroring… This was an awesome experience for a junior starting in IT. Interestingly, I think that today a junior can learn the same concepts with a Cloud Foundation training and certification. This has not really changed except the unboxing and cabling. The big difference is that today we do not have to wait weeks for it and can set up the same infrastructure in 10 minutes.

That was my first DevOps experience: we wanted to develop our application without waiting for the IT department. But it was not serverless at all.

A few years later I was starting a new data warehouse for a mobile telco in Africa. Again, weeks to months were required to order and install a server for it. And we didn’t wait. We started the first version of the data warehouse on a spare PC we had. This was maybe my first serverless experience: the server provisioning was out of the critical path in the project planning. Of course, a PC is not a server and reliability and performance were not there. But we were lucky, and when the server arrived we already had good feedback from this first version.

We need serverless, but we need real servers behind it. Today, this is possible: you don’t need to wait and you can provision a new database in the public or private cloud, or simply on a VM, without waiting. And all security, reliability and performance are there. With Oracle, it is a bit more difficult if you can’t do it in their public cloud, because licensing does not count vCPUs and you often need specific hardware for it like in the old days. Appliances like ODA can help. Public Cloud or Cloud@Customer definitely helps.

Serverless as not taking responsibility for server administration

Serverless is not only about running on virtual servers with easy provisioning. If you are serverless, you don’t want to manage those virtual machines. You start and connect to a compute instance. You define its shape (CPU, RAM) but you don’t want to know where it runs physically. Of course, you want to define the region for legal, performance or cost reasons, but not which data center, which rack,… That’s the second step of serverless: you don’t manage the physical servers. In Oracle Cloud, you run a Compute Instance where you can install a database. In AWS this is an EC2 instance where you can install a database.

But, even if you don’t own the responsibility for the servers, this is not yet “serverless”. Because you still pay for them. If your CFO still sees a bill for a compute instance, you are not serverless.

Serverless as not paying for the server

AWS has a true serverless and elastic database offer: Amazon Aurora Serverless. You don’t have to start or stop the servers; this is done automatically when you connect. More activity adds more servers, no connections stop them. And you pay only for what the application is using: you don’t pay for database servers just running, you pay for actual application usage.

Azure has also a Serverless SQL Server: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-serverless

Those are, as far as I know, the only true serverless databases so far. If we need to stop and start the compute services ourselves, even with some level of auto-scaling, we can call that on-demand but not serverless.

All AWS RDS services including Aurora can be started and stopped on demand. They can scale up or down with minimal downtime, especially in Multi-AZ because the standby can be scaled and activated. Redshift cannot be stopped because it uses local storage. But you can take a snapshot and terminate the instance, and restore it later.

On the Oracle side, the Autonomous Database can be stopped and started. Then again, we can say that we don’t pay when we don’t use the database, but we cannot say that we don’t pay when we don’t use the application, because the database is up even if the application is not used. However, you can scale without the need to stop and start. And there’s also some level of auto-scaling where the additional application usage is really billed on CPU usage metrics: you pay for n OCPUs when the ATP or ADB is up and you can use up to n*3 sessions on CPU, with true serverless billing for what is above the provisioned OCPUs. Maybe the future will go further. The technology allows it: multitenant allows PDB-level CPU caging where the capacity can be changed online (setting CPU_COUNT) and AWR gathers the CPU load with many metrics that can be used for billing.
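
To illustrate that last point, PDB-level CPU caging is just an online parameter change. This is only a sketch, with a made-up PDB name, and it assumes you are connected to the PDB with the ALTER SYSTEM privilege:

-- connected to the application PDB (hypothetical name PDB1)
alter system set cpu_count = 2;   -- cage this PDB to 2 CPUs
-- later, scale up online, no restart needed:
alter system set cpu_count = 4;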

Serverless

The name is funny because serverless programs run on servers. And the rush to run without servers is paradoxical. When I started programming, it was on very small computers (ZX-81, Apple //e, IBM PC-XT) and I was really proud when I started to do real stuff running on real servers, with a schema on *the* company database. Actually, what is called serverless today is, in my opinion, showing the full power of servers: you don’t need to buy a computer for a project but can use some mutualized compute power.

The cloud wars use strange marketing terms, but really good technology and concepts are coming.

The post What is a serverless database? appeared first on Blog dbi services.

DB-Upgrade hangs in SE2 waiting on Streams AQ while gathering statistics on LOGMNR-Tables

Yann Neuhaus - Fri, 2020-06-05 14:51

A couple of weeks ago I upgraded an Oracle Standard Edition 2 test database from 12.1.0.2 to 12.2.0.1 (with the April 2020 Patch Bundle) on Windows. Recently I upgraded the production database. Both upgrades were done with the Database Upgrade Assistant (DBUA). I didn’t use AUTOUPGRADE because I had to upgrade only 1 database and the DBUA handles everything for me (including changing the necessary Windows services and updating the timezone file).

Both upgrades hung at the finalizing phase of the component upgrade.

So I checked what the upgrade process was waiting for in the DB:


SQL> select sid, sql_id, event,p1,p2,p3 from v$session 
   2 where status='ACTIVE' and type='USER' and sid not in 
   3 (select sid from v$mystat);

       SID SQL_ID        EVENT                                                 P1         P2         P3
---------- ------------- -------------------------------------------------- ----- ---------- ----------
      1142 fgus25bx1md8q Streams AQ: waiting for messages in the queue      17409 1.4072E+14 2147483647

SQL> set long 400000 longchunksize 200
SQL> select sql_fulltext from v$sqlarea where sql_id='fgus25bx1md8q';

SQL_FULLTEXT
---------------------------------------------------------------------------------
DECLARE
        cursor table_name_cursor  is
                select  x.name table_name
                from sys.x$krvxdta x
                where bitand(x.flags, 12) != 0;
        filter_lst DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
        obj_lst    DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
        ind number := 1;
BEGIN
   for rec in table_name_cursor loop
      begin
        filter_lst.extend(1);
        filter_lst(ind).ownname := 'SYSTEM';
        filter_lst(ind).objname := 'LOGMNR_'|| rec.table_name||'';
        ind := ind + 1;
      end;
   end loop;
   DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME=>'SYSTEM', objlist=>obj_lst, obj_filter_list=>filter_lst);
END;

So obviously the upgrade process tried to gather stats on the LOGMNR tables owned by SYSTEM and was waiting for messages in the scheduler queue SCHEDULER$_EVENT_QUEUE (Object ID 17409), i.e. something similar to what is documented in MOS Note 1559487.1.
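
For this wait event, P1 is the object id of the queue, so you can confirm which queue the session is waiting on with a query like the following (17409 is the value seen above; this is just a quick check, not part of the fix):

SQL> select owner, object_name, object_type from dba_objects where object_id = 17409;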

The upgrade was stuck at this point. So what to do?

Fortunately I remembered a blog from Mike Dietrich about DBUA being restartable in 12.2:

Restarting a failed Database Upgrade with DBUA 12.2

So I killed the waiting session:


SQL> select serial# from v$session where sid=1142;

   SERIAL#
----------
     59722

SQL> alter system kill session '1142,59722';

System altered.

Then I let the DBUA run into tons of errors and let it finish its work. To restart it I just clicked on “Retry” in the GUI. After some time DBUA ran into an error again. I quickly checked the log files and clicked on “Retry” again. That time it went through without issues. Checking the log files and the result of the upgrade showed all components migrated correctly.

So in summary: a failed upgrade (crashed or hanging) with DBUA is no longer as bad a thing as it was before 12.2. You can just let DBUA (or AUTOUPGRADE) retry its work. Of course, usually you have to fix the reason for the failure before restarting/retrying.

REMARK: See Mike Dietrich’s blog about resumability and restartability of AutoUpgrade here:

Troubleshooting, Restoring and Restarting AutoUpgrade

The post DB-Upgrade hangs in SE2 waiting on Streams AQ while gathering statistics on LOGMNR-Tables appeared first on Blog dbi services.

ORA-01830 in sqlldr

Tom Kyte - Fri, 2020-06-05 14:46
After upgrading from 11g to 12.2.0.1 Oracle, application folks claim the sqlldr of certain table(s) fails now with ORA-01830. LOG FILE) pg MARA.log SQL*Loader: Release 12.2.0.1.0 - Production on Tue May 26 14:48:06 2020 Copyri...
Categories: DBA Blogs

Lineage OS: Building my own system from scratch...

Dietrich Schroff - Fri, 2020-06-05 13:48
For my Samsung J5 I was only able to find a Lineage 15.1 image but no Lineage 16 or even 17.
So I decided to build the image on my own.

I followed this guide:
https://github.com/lineage-doge/device-j5y17lte/tree/lineage-16.0

But on this page the roomservice.xml is wrong:

 

Just change it to








The rest was following the instructions:
mkdir -p android/lineage-16.0
cd android/lineage-16.0
repo init -u git://github.com/LineageOS/android.git -b lineage-16.0
repo sync

Create .repo/local_manifests/roomservice.xml with the content shown above.
Then run "repo sync" once again.

The most important point is to extract the binaries from your device. To get some background, just read this:
https://wiki.lineageos.org/extracting_blobs_from_zips.html


I extracted /system from my Android device via TWRP (under Advanced you can find a file explorer which can copy the partition to your SD card).
After that, point STOCK_ROM_DIR to your SD card and run:
cd device/samsung/j5y17lte
STOCK_ROM_DIR=/path/to/system ./extract-files.sh
Now we are ready to rumble:
cd ../../
source build/envsetup.sh
brunch lineage_j5y17lte-userdebug
But after some minutes:
LD      kernel/built-in.o
/media/schroff/895337b7-c4eb-44a4-9063-00f629e5149f/lineage-16.0/kernel/samsung/exynos7870/drivers/net/wireless/Makefile:62: android version 1: 90000
/media/schroff/895337b7-c4eb-44a4-9063-00f629e5149f/lineage-16.0/kernel/samsung/exynos7870/drivers/net/wireless/Makefile:63: MAJOR_VERSION version 1: 9
/media/schroff/895337b7-c4eb-44a4-9063-00f629e5149f/lineage-16.0/kernel/samsung/exynos7870/drivers/net/wireless/Makefile:107: platform version 3 : 9.0.0
/media/schroff/895337b7-c4eb-44a4-9063-00f629e5149f/lineage-16.0/kernel/samsung/exynos7870/drivers/net/wireless/Makefile:108: MAJOR_VERSION version 3: 9
  LD      drivers/built-in.o
make[1]: Leaving directory '/media/schroff/895337b7-c4eb-44a4-9063-00f629e5149f/lineage-16.0/out/target/product/j5y17lte/obj/KERNEL_OBJ'
Makefile:145: recipe for target 'sub-make' failed
make: *** [sub-make] Error 2
make: Leaving directory '/media/schroff/895337b7-c4eb-44a4-9063-00f629e5149f/lineage-16.0/kernel/samsung/exynos7870'
[  1% 555/35382] target StaticLib: libv8src_32 (/media/schroff/895337b7-c4eb-44a4-...arget/product/j5y17lte/obj_arm/STATIC_LIBRARIES/libv8src_intermediates/libv8src.a)
ninja: build stopped: subcommand failed.
19:21:20 ninja failed with: exit status 1

#### failed to build some targets (05:29 (mm:ss)) ####
But this error is easy to fix:
cd ~/android/lineage-16.0/out/target/product/j5y17lte/obj/KERNEL_OBJ/firmware/tsp_imagis
cp ~/android/lineage-16.0/kernel/samsung/exynos7870/firmware/tsp_imagis/ist3038h_a3x_cmcs.bin .
cp ~/android/lineage-16.0/kernel/samsung/exynos7870/firmware/tsp_imagis/ist3038h_a3x.fw .

and try once again!

And after a while:
  Imgdiff Stats Report 
========================

Compressing system.new.dat with brotli
  running:  brotli --quality=6 --output=/tmp/tmpAOkKFk/system.new.dat.br /tmp/tmpAOkKFk/system.new.dat
using prebuilt boot.img from BOOTABLE_IMAGES...
  running:  java -Xmx2048m -Djava.library.path=/media/schroff/895337b7-c4eb-44a4-9063-00f629e5149f/lineage-16.0/out/host/linux-x86/lib64 -jar /media/schroff/895337b7-c4eb-44a4-9063-00f629e5149f/lineage-16.0/out/host/linux-x86/framework/signapk.jar -w build/target/product/security/testkey.x509.pem build/target/product/security/testkey.pk8 /tmp/tmptUxdfx.zip /tmp/tmpSgxnBy.zip
  running:  zip -d /tmp/tmpSgxnBy.zip META-INF/com/android/metadata
  running:  java -Xmx2048m -Djava.library.path=/media/schroff/895337b7-c4eb-44a4-9063-00f629e5149f/lineage-16.0/out/host/linux-x86/lib64 -jar /media/schroff/895337b7-c4eb-44a4-9063-00f629e5149f/lineage-16.0/out/host/linux-x86/framework/signapk.jar -w build/target/product/security/testkey.x509.pem build/target/product/security/testkey.pk8 /tmp/tmpSgxnBy.zip /media/schroff/895337b7-c4eb-44a4-9063-00f629e5149f/lineage-16.0/out/target/product/j5y17lte/lineage_j5y17lte-ota-eng.schroff.zip
done.
[100% 453/453] build bacon
Package Complete: /media/schroff/895337b7-c4eb-44a4-9063-00f629e5149f/lineage-16.0/out/target/product/j5y17lte/lineage-16.0-20200502-UNOFFICIAL-j5y17lte.zip

#### build completed successfully (27:31 (mm:ss)) ####


Analytic cost error

Jonathan Lewis - Fri, 2020-06-05 07:06

Here’s a surprising costing error that was raised on the Oracle Developer Forum a few days ago. There’s a glitch in the cost attributed to sorting when an analytic over() clause – with a corresponding “window sort” operation – makes a “sort order by” operation redundant. Here’s a script to generate the data set I’ll use for a demonstration, with a template for a few queries I’ll be running against the data.


rem
rem     Script:         window_sort_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2020
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table t1
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                    id,
        cast(lpad(rownum,30,'0') as varchar2(30)) vc30,
        cast(lpad(rownum,65,'0') as varchar2(65)) vc65,
        lpad('x',100,'x')                         padding
from
        generator
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

select
--      row_number() over (order by vc65) rn1,
--      row_number() over (order by vc30) rn2,
--      vc30,
--      vc65,
        id
from
        t1
-- order by
--      vc65
--      vc30
/


I’m (optionally) using the row_number() analytic function over the entire data set, and for each row_number() that I include in the select list Oracle will have to sort the data; I’ve also got an (optional) order by on the two columns that appear in the row_number() functions and that may introduce some sorting as well. Here, for example, are a few examples of the queries I might run:


prompt  ===========================================
prompt  Select vc30, order by vc30
prompt  ===========================================

explain plan for
select 
        vc30,
        id 
from 
        t1
order by
        vc30
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

prompt  ==========================================
prompt  Select row_number over vc30 - no ordering
prompt  ==========================================

explain plan for
select 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

prompt  ===========================================
prompt  Select row_number over vc30 - order by vc65
prompt  ===========================================

explain plan for
select 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
order by
        vc65
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

In the first query we select and sort vc30 so (approximately) we sort 10,000 rows x 30(-ish) bytes for 300K of sort space. In the second query we generate the row_number() based on sorting vc30 – the size of the output is much smaller (it’s only 10,000 numbers between 1 and 10,000) but to generate those numbers we do have to select and sort vc30, so the workload (predicted and actual) will probably be similar to that of the first query. In the final query we have to select and sort vc30 to generate the row_number() but we also have to select (without reporting) and sort vc65 in order to report the results in the right order – so we should expect the workload to be roughly 3 times the size (approximately 10,000 * (30 + 65) bytes). Here, from 12.2.0.1, are the execution plans (with a little cosmetic tidying):

===========================================
Select vc30, order by vc30
===========================================
------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   341K|       |   130   (5)|
|   1 |  SORT ORDER BY     |      | 10000 |   341K|   448K|   130   (5)|
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   341K|       |    42   (5)|
------------------------------------------------------------------------

==========================================
Select row_number over vc30 - no ordering
==========================================
------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   341K|       |   130   (5)|
|   1 |  WINDOW SORT       |      | 10000 |   341K|   448K|   130   (5)|
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   341K|       |    42   (5)|
------------------------------------------------------------------------

===========================================
Select row_number over vc30 - order by vc65
===========================================
-------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   986K|       |   463   (3)|
|   1 |  SORT ORDER BY      |      | 10000 |   986K|  1120K|   463   (3)|
|   2 |   WINDOW SORT       |      | 10000 |   986K|  1120K|   463   (3)|
|   3 |    TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
-------------------------------------------------------------------------

As expected, the execution plans are roughly consistent with the estimates I gave for volume of data – the agreement between the query with order by vc30 and the query with over(order by vc30), and the increased load of ordering by vc65 when selecting the row_number(over vc30) is good.

So let’s see what the plan looks like when we select row_number(over vc30) and then order by vc30. If the optimizer is smart it will recognise that it’s possible to adopt a strategy that allows it to take advantage of the sorting from the over() clause to avoid a separate sort order by:


explain plan for
select 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
order by
        vc30
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

----------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   341K|    42   (5)|
|   1 |  WINDOW SORT       |      | 10000 |   341K|    42   (5)|
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   341K|    42   (5)|
----------------------------------------------------------------

The plan shows us that Oracle has used the 10gR2 “order by elimination” feature to bypass the need for a “sort order by” operation because it knows the data will be arriving in the right order from the “Window Sort” operation. Unfortunately it also shows us that Oracle has lost the cost of doing the Window Sort!

Let’s try another experiment – let’s generate two different row_number() columns, with and without ordering:


prompt  =====================================
prompt  Select Both row_numbers - no ordering
prompt  =====================================

explain plan for
select 
        row_number() over (order by vc65) rn1, 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes projection'));

prompt  ======================================
prompt  Select Both row_numbers order by vc30
prompt  ======================================

explain plan for
select 
        row_number() over (order by vc65) rn1, 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
order by
        vc30
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes projection'));

prompt  ============================================
prompt  Select Both row_numbers order by vc65
prompt  ============================================

explain plan for
select 
        row_number() over (order by vc65) rn1, 
        row_number() over (order by vc30) rn2, 
        id 
from 
        t1
order by
        vc65
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes projection'));

You’ll notice that I’ve included a request for the projection information in the plans for these examples so that you can see what columns are passed up from each operation to its parent. Again, though, we’ll start by focusing on just the costs:


-------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   986K|       |   463   (3)|
|   1 |  WINDOW SORT        |      | 10000 |   986K|  1120K|   463   (3)|
|   2 |   WINDOW SORT       |      | 10000 |   986K|  1120K|   463   (3)|
|   3 |    TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
-------------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=1) "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65],
       "ID"[NUMBER,22], ROW_NUMBER() OVER ( ORDER BY "VC65")[22], ROW_NUMBER()
       OVER ( ORDER BY "VC30")[22]
   2 - (#keys=1) "VC65"[VARCHAR2,65], "ID"[NUMBER,22],
       "VC30"[VARCHAR2,30], ROW_NUMBER() OVER ( ORDER BY "VC65")[22]
   3 - "ID"[NUMBER,22], "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65]


-------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   986K|       |   253   (3)|
|   1 |  WINDOW SORT        |      | 10000 |   986K|  1120K|   253   (3)|
|   2 |   WINDOW SORT       |      | 10000 |   986K|  1120K|   253   (3)|
|   3 |    TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
-------------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=1) "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65],
       "ID"[NUMBER,22], ROW_NUMBER() OVER ( ORDER BY "VC65")[22], ROW_NUMBER()
       OVER ( ORDER BY "VC30")[22]
   2 - (#keys=1) "VC65"[VARCHAR2,65], "ID"[NUMBER,22],
       "VC30"[VARCHAR2,30], ROW_NUMBER() OVER ( ORDER BY "VC65")[22]
   3 - "ID"[NUMBER,22], "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65]


-------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   986K|       |   253   (3)|
|   1 |  WINDOW SORT        |      | 10000 |   986K|  1120K|   253   (3)|
|   2 |   WINDOW SORT       |      | 10000 |   986K|  1120K|   253   (3)|
|   3 |    TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
-------------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=1) "VC65"[VARCHAR2,65], "VC30"[VARCHAR2,30],
       "ID"[NUMBER,22], ROW_NUMBER() OVER ( ORDER BY "VC30")[22], ROW_NUMBER()
       OVER ( ORDER BY "VC65")[22]
   2 - (#keys=1) "VC30"[VARCHAR2,30], "ID"[NUMBER,22],
       "VC65"[VARCHAR2,65], ROW_NUMBER() OVER ( ORDER BY "VC30")[22]
   3 - "ID"[NUMBER,22], "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65]

The first query – without an “order by” clause – reports a cost of 463; add an order by clause and the cost drops to 253 (and the “order by” clause doesn’t appear as a sort order by operation in the plan). The cost differential between the ordered and “unordered” plans, by the way, is 210 (and from there down to the base tablescan is another 211) – and here’s another way to see that number (+/- 1) appearing:


explain plan for
select
        vc65,
        id
from
        t1
order by
        vc30
/

select * from table(dbms_xplan.display(null,null,'basic cost rows bytes'));

------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)|
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   986K|       |   253   (3)|
|   1 |  SORT ORDER BY     |      | 10000 |   986K|  1120K|   253   (3)|
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   986K|       |    42   (5)|
------------------------------------------------------------------------

The cost of the query with no “order by” clause is basically the cost of a table scan plus two sorts of (vc65, vc30, plus a few bits). When you add in an “order by” clause the optimizer discards the “order by” clause and then subtracts one of the sort costs as well.
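
Putting rough numbers on that: 463 minus 253 is 210, and 253 minus the 42 of the bare tablescan is 211; in other words the unordered plan carries two sort costs of roughly 210 each on top of the scan, and adding the “order by” clause has made the optimizer subtract exactly one of them.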

CBO trace file

Every time I say something about 10053 (CBO) trace files I feel compelled to remind everyone that I rarely look at them, and then it’s usually because I think I know there’s a bug and where I’ll find it in the trace. That’s exactly the case here.

I’m expecting to see two differences in the trace files between the “no order” query and a query where I’ve added in an “order by” clause. One difference is that one trace file will have an “OBYE” (eliminate order by) comment which won’t be in the other trace; the other is that one trace file will have a “cost for SORT” calculation which won’t be in the other.

So here are the relevant bits – first from the query without the order by clause:


OBYE:   Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE:     OBYE bypassed: no order by to eliminate.

...

GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]:  T1[T1]#0
WiF sort
    SORT ressource         Sort statistics
      Sort width:         497 Area size:      435200 Max Area size:    87240704
      Degree:               1
      Blocks to Sort: 150 Row size:     122 Total Rows:          10000
      Initial runs:   2 Merge passes:  1 IO Cost / pass:         56
      Total IO sort cost: 206.000000      Total CPU sort cost: 12180621
      Total Temp space used: 1147000
    SORT ressource         Sort statistics
      Sort width:         497 Area size:      435200 Max Area size:    87240704
      Degree:               1
      Blocks to Sort: 150 Row size:     122 Total Rows:          10000
      Initial runs:   2 Merge passes:  1 IO Cost / pass:         56
      Total IO sort cost: 206.000000      Total CPU sort cost: 12180621
      Total Temp space used: 1147000
***********************
Best so far:  Table#: 0  cost: 463.384707  card: 10000.000000  bytes: 1010000.000000
***********************

And from one of the plans with an order by:


OBYE:   Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE:     OBYE performed.

...

GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]:  T1[T1]#0
WiF sort
    SORT ressource         Sort statistics
      Sort width:         497 Area size:      435200 Max Area size:    87240704
      Degree:               1
      Blocks to Sort: 150 Row size:     122 Total Rows:          10000
      Initial runs:   2 Merge passes:  1 IO Cost / pass:         56
      Total IO sort cost: 206.000000      Total CPU sort cost: 12180621
      Total Temp space used: 1147000
***********************
Best so far:  Table#: 0  cost: 252.512458  card: 10000.000000  bytes: 1010000.000000
***********************

As you can see, the first (no order) trace file has two sort calculations under WiF sort (Window Function?), while the second (order by) trace file reports “OBYE performed” and then loses one of its WiF sorts.

Note: If I had ordered by ID I would have seen two calculations of cost under the heading of WiF sort and a third calculation below that with the heading ORDER BY sort. Unfortunately when I ran the test to check this I also found that the OBYE report said: “OBYE performed” when it wasn’t relevant and there wasn’t an “order by” available for elimination.

Interestingly I tested to see if I could change the optimizer’s behaviour by adding the no_eliminate_oby(@sel$1) hint to the “order by” queries but the GENERAL PLANS section didn’t change, even though the trace file reported: “OBYE: OBYE bypassed: hinted”, and the “Hint Report” from the Oracle 19c execution plan acknowledged the hint as legal and used.

Summary

If you have an “order by” clause in a query block that includes analytic functions and the optimizer decides that it can eliminate the “order by” and rely on the side effect of an analytic over() clause you may find that the cost of the query block is reduced by the cost of one of the Window Sort operations. (Technically this might lead to cases where the optimizer then made some poor choices in overall shape of the execution plan – though such cases might be very rare given that this costing error doesn’t affect the cardinality estimates.)

Lagniappe

In the last set of tests I added in the projection information as a simple example of a case where it can help you understand a little more of what the plan is supposed to achieve.  If you examine the last two query plans carefully (select both row_number() values and order by vc30 / vc65 respectively) Operation 2 of the first plan reports:

2 - (#keys=1) "VC65"[VARCHAR2,65], "ID"[NUMBER,22], "VC30"[VARCHAR2,30], ROW_NUMBER() OVER ( ORDER BY "VC65")[22]

while operation 2 of the second plan reports:

2 - (#keys=1) "VC30"[VARCHAR2,30], "ID"[NUMBER,22], "VC65"[VARCHAR2,65], ROW_NUMBER() OVER ( ORDER BY "VC30")[22]

It’s not until you look at these two lines that the plans show any differences – operations 1 and 2 simply say “Window Sort” without giving any clue about which window sort is for which over() clause. The projection information, though, tells you which way around the over() clauses operated – in the first query the over(order by vc65) is applied to the result of the tablescan first, while in the second query it’s the over(order by vc30) that is applied first.

Lagniappe 2

There’s another little oddity you might spot when you look at the projection information and think about the WiF sort costs from the unordered query. The columns passed from operation 3 to operation 2 are:

 3 - "ID"[NUMBER,22], "VC30"[VARCHAR2,30], "VC65"[VARCHAR2,65]

The columns passed from operation 2 to operation 1 are one of:

2 - (#keys=1) "VC65"[VARCHAR2,65], "ID"[NUMBER,22], "VC30"[VARCHAR2,30], ROW_NUMBER() OVER ( ORDER BY "VC65")[22]

2 - (#keys=1) "VC30"[VARCHAR2,30], "ID"[NUMBER,22], "VC65"[VARCHAR2,65], ROW_NUMBER() OVER ( ORDER BY "VC30")[22]

Operation 2 sorts the data from operation 3, and operation 1 sorts the data from operation 2 – but the columns arriving from operation 2 have an extra row_number()[22] added to them. So when you look in the trace file at the two WiF sort calculations, why do they both show:

Blocks to Sort: 150 Row size: 122 Total Rows: 10000

Shouldn’t one of them show a Row size that’s (at least) 22 longer than the other?
CASB In Oracle Cloud (OCI)

Online Apps DBA - Fri, 2020-06-05 06:56

Oracle provides a service called CASB in OCI for security monitoring of services like VCN, Compute instances, DB Systems, etc. CASB provides features like visibility, threat protection, data protection, and much more! To know more about CASB, check our blog post at https://k21academy.com/1z099721 that covers: • Overview Of Oracle CASB • Key Features Of CASB • […]

The post CASB In Oracle Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Step-By-Step Deploy Azure Virtual Machines In Availability Set

Online Apps DBA - Fri, 2020-06-05 04:14

When we create two Virtual Machines (VMs) across Fault Domains in an Availability Set, it increases the availability of your applications to 99.95%. On the other hand, Availability Zones allow you to deploy your VMs into different data centers within the same region, so you get 99.99% uptime. So, what exactly is […]

The post Step-By-Step Deploy Azure Virtual Machines In Availability Set appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Installing a UI for Tanzu Application Service for Kubernetes

Pas Apicella - Thu, 2020-06-04 23:18
Having installed Tanzu Application Service for Kubernetes a few times, a UI is something I must have. In this post I show how to get Stratos deployed and running on Tanzu Application Service for Kubernetes (TAS4K8s) beta 0.2.0.

Steps

Note: It's assumed you have TAS4K8s deployed and running as per the output of "kapp" 

$ kapp list
Target cluster 'https://lemons.run.haas-236.pez.pivotal.io:8443' (nodes: a51852ac-e449-40ad-bde7-1beb18340854, 5+)

Apps in namespace 'default'

Name  Namespaces                                    Lcs   Lca
cf    (cluster),build-service,cf-blobstore,cf-db,   true  2h
      cf-system,cf-workloads,cf-workloads-staging,
      istio-system,kpack,metacontroller

Lcs: Last Change Successful
Lca: Last Change Age

1 apps

Succeeded

1. First let's create a namespace to install Stratos into.

$ kubectl create namespace console
namespace/console created

2. Using Helm 3, install Stratos as shown below.

$ helm install my-console --namespace=console stratos/console --set console.service.type=LoadBalancer
NAME: my-console
LAST DEPLOYED: Fri Jun  5 13:18:22 2020
NAMESPACE: console
STATUS: deployed
REVISION: 1
TEST SUITE: None

3. You can verify it installed correctly in a few ways, as shown below.

- Check using "helm ls -A"
$ helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-console console 1 2020-06-05 13:18:22.785689 +1000 AEST deployed console-3.2.1 3.2.1
- Check everything in the namespace "console" is up and running
$ kubectl get all -n console
NAME READY STATUS RESTARTS AGE
pod/stratos-0 2/2 Running 0 34m
pod/stratos-config-init-1-mxqbw 0/1 Completed 0 34m
pod/stratos-db-7fc9b7b6b7-sp4lf 1/1 Running 0 34m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-console-mariadb ClusterIP 10.100.200.65 <none> 3306/TCP 34m
service/my-console-ui-ext LoadBalancer 10.100.200.216 10.195.75.164 443:32286/TCP 34m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/stratos-db 1/1 1 1 34m

NAME DESIRED CURRENT READY AGE
replicaset.apps/stratos-db-7fc9b7b6b7 1 1 1 34m

NAME READY AGE
statefulset.apps/stratos 1/1 34m

NAME COMPLETIONS DURATION AGE
job.batch/stratos-config-init-1 1/1 28s 34m
4. To invoke the UI run a script as follows.

Script:

export IP=`kubectl -n console get service my-console-ui-ext -ojsonpath='{.status.loadBalancer.ingress[0].ip}'`

echo ""
echo "Stratos URL: https://$IP:443"
echo ""

Output:

$ ./get-stratos-url.sh

Stratos URL: https://10.195.75.164:443

5. Invoking the URL above will take you to a screen as follows where you would select the "Local Admin" account



6. Set a password and click "Finish" button


7. At this point we need to get an API endpoint for our TAS4K8s install. The easiest way to get that is to run the following command when logged in using the CF CLI:

$ cf api
api endpoint:   https://api.system.run.haas-236.pez.pivotal.io
api version:    2.150.0

8. Click on the "Register an Endpoint" + button as shown below


9. Select "Cloud Foundry" as the type you wish to register.

10. Enter details as shown below and click on "Register" button.
 


11. At this point you should connect to Cloud Foundry using your admin credentials for the TAS4K8s instance as shown below.


12. Once connected you're good to go and can start deploying some applications.




Categories: Fusion Middleware

AWS – RDS – SQL Server Native backups

Yann Neuhaus - Thu, 2020-06-04 17:22
Introduction

RDS provides an automatic backup feature. This feature will back up the entire RDS instance. As a DBA you sometimes need to back up an individual database; there are many reasons for that (legal, migration, replication, …). Unfortunately, individual database backups are not available within the RDS instance.

This post explains how you can enable native database backups on RDS, as you are used to with an on-premises SQL Server instance.
To summarize, we will create an S3 bucket on AWS to store the backups, create an IAM role having the mandatory permissions on the S3 bucket, and create an RDS Option Group associated with the role and having the SQLSERVER_BACKUP_RESTORE option.

Of course you need to have an existing RDS instance running. I have one with SQL Server 2017 EE.

Create a S3 bucket

If you do not already have one, first create an S3 bucket that will be your repository for saving your database backups.
Open your S3 management console and click [Create bucket]

Enter a S3 bucket name and select the region where your RDS instance is located.
When done click [Create]

Create a IAM role

Open the AWS IAM management console, select [Policies] in the navigation pane and click [Create policy]

Select the JSON tab and copy the following script to replace the existing one

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "VisualEditor0",
			"Effect": "Allow",
			"Action": [
			"s3:ListBucket",
			"s3:GetBucketLocation"
			],
			"Resource": "arn:aws:s3:::dbi-sql-backup"
		},
		{
			"Sid": "VisualEditor1",
			"Effect": "Allow",
			"Action": [
			"s3:PutObject",
			"s3:GetObject",
			"s3:AbortMultipartUpload",
			"s3:ListMultipartUploadParts"
			],
			"Resource": "arn:aws:s3:::dbi-sql-backup/*"
		}
	]
}

In the script do not forget to replace the S3 bucket name with the one you created previously and click [Review policy]

Set a name for your policy and finish the creation with [Create policy]

Now, select [Roles] in the navigation pane. Click the option [Create role]

Select the [AWS service] option, then the [EC2] and finally again [EC2] in the use case list and click [Next:Permissions]

Search for the policy you created and select it. Then click [Next:Tags]. I recommend adding some tags so you can identify the role more easily later. Then click [Next:Review]

Key in your role name and finally press [Create Role]

Once created, edit the role again by selecting it in the role list

Select the [Trusted relationship] tab and edit it.

Replace the line “Service”: “ec2.amazonaws.com” with “Service”: “rds.amazonaws.com” as we want this role to be active in RDS.
Click [Update Trust Policy]

Create an Option Group

Next step is to create an option group.
To do so, open the RDS management console and select [Option groups] in the navigation pane and click on [Create group]

Set the name of your option group and select the engine version and Major Engine Version of your RDS instance and create your option group.

Now select your option group in the list and add an option.

In the option detail section select SQLSERVER_BACKUP_RESTORE and in the IAM section select the role you created previously. In the scheduling option choose the option you want. In my case I want it to be applied immediately. Then click [Add option].

Link your RDS instance with your option group

The last configuration step is to couple your RDS instance with the create option group

In the RDS management console select [Databases] in the navigation pane and select the RDS instance for which you need native backup to be activated. Make sure as well that the version of the RDS instance matches the one set in the option group you created previously. Check that your instance is available and then click [Modify]

Scroll down to the Database options section. In the option group combo box select the option group you created previously and then click [Continue] at the bottom of the page.

Choose when you want to apply the modification. Be aware that if you select “Apply immediately” your RDS instance will restart and you will have an interruption of the service.

Test the Backup

Connect to your RDS SQL Server instance using, for example, Microsoft SQL Server Management Studio.
There is a stored procedure in [msdb] named [dbo].[rds_backup_database] that you must use to start your native database backup

USE [msdb]

EXECUTE [dbo].[rds_backup_database]
@source_db_name = 'ProductDB'
,@s3_arn_to_backup_to = 'arn:aws:s3:::dbi-sql-backup/awsvdata_ProductDB.bak'
--,@kms_master_key_arn
--,@overwrite_s3_backup_file
--,@type
--,@number_of_files
GO

 

Adapt the script with your database name and the path of your S3 bucket with the backup file name. Note that I did not use all parameters of the stored procedure in this post.
The result of the stored procedure execution will give you a task_id associated with your command.

With the task_id, you can follow up on the status of the process with the following stored procedure:

Use [msdb]
execute rds_task_status @task_id = 4

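The companion restore procedure works the same way. Here is a minimal sketch (the target database name is made up; like the backup call it returns a task_id that you can track with rds_task_status):

USE [msdb]

EXECUTE [dbo].[rds_restore_database]
@restore_db_name = 'ProductDB_Copy'
,@s3_arn_to_restore_from = 'arn:aws:s3:::dbi-sql-backup/awsvdata_ProductDB.bak'
GO
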
Conclusion

Enabling native database backups is indeed very practical. Unfortunately there are some limitations.

For instance there is no differential, transaction log or filegroup backup or restore possibility, which could be very useful in many scenarios.

The post AWS – RDS – SQL Server Native backups appeared first on Blog dbi services.

Oracle 18c – select from a flat file

Yann Neuhaus - Thu, 2020-06-04 16:09
By Franck Pachot

.
This post is the first one from a series of small examples on recent Oracle features. My goal is to present them to people outside of Oracle and relational databases usage, maybe some NoSQL players. And this is why the title is “select from a flat-file” rather than “Inline External Tables”. In my opinion, the names of the features of Oracle Database are invented by the architects and developers, sometimes renamed by Marketing or CTO, and all that is very far from what the users are looking for. In order to understand “Inline External Table” you need to know all the history behind: there were tables, then external tables, and there were queries, and inlined queries, and… But imagine a junior who just wants to query a file, he will never find this feature. He has a file, it is not a table, it is not external, and it is not inline. What is external to him is this SQL language and what we want to show him is that this language can query his file.

I’m running this in the Oracle 20c preview in the Oracle Cloud.

In this post, my goal is to load a small fact and dimension table for the next posts about some recent features that are interesting in data warehouses. It is the occasion to show that with Oracle we can easily select from a .csv file, without the need to run SQL*Loader or create an external table.
I’m running everything from SQLcl and then I use the host command to call curl:


host curl -L http://opendata.ecdc.europa.eu/covid19/casedistribution/csv/ | dos2unix | sort -r > /tmp/covid-19.csv

This gets the latest number of COVID-19 cases per day and per country.

It looks like this:


SQL> host head  /tmp/covid-19.csv
dateRep,day,month,year,cases,deaths,countriesAndTerritories,geoId,countryterritoryCode,popData2018,continentExp
31/12/2019,31,12,2019,27,0,China,CN,CHN,1392730000,Asia
31/12/2019,31,12,2019,0,0,Vietnam,VN,VNM,95540395,Asia
31/12/2019,31,12,2019,0,0,United_States_of_America,US,USA,327167434,America
31/12/2019,31,12,2019,0,0,United_Kingdom,UK,GBR,66488991,Europe
31/12/2019,31,12,2019,0,0,United_Arab_Emirates,AE,ARE,9630959,Asia
31/12/2019,31,12,2019,0,0,Thailand,TH,THA,69428524,Asia
31/12/2019,31,12,2019,0,0,Taiwan,TW,TWN,23780452,Asia
31/12/2019,31,12,2019,0,0,Switzerland,CH,CHE,8516543,Europe
31/12/2019,31,12,2019,0,0,Sweden,SE,SWE,10183175,Europe

I sorted them on date on purpose (next posts may talk about data clustering).

I need a directory object to access the file:


SQL> create or replace directory "/tmp" as '/tmp';

Directory created.

You don’t have to use quoted identifiers if you don’t like it. I find it convenient here.

I can directly select from the file, the EXTERNAL clause mentioning what we had to put in an external table before 18c:


SQL> select *
   from external (
    (
     dateRep                    varchar2(10)
     ,day                       number
     ,month                     number
     ,year                      number
     ,cases                     number
     ,deaths                    number
     ,countriesAndTerritories   varchar2(60)
     ,geoId                     varchar2(30)
     ,countryterritoryCode      varchar2(3)
     ,popData2018               number
     ,continentExp              varchar2(30)
    )
    default directory "/tmp"
    access parameters (
     records delimited by newline skip 1 -- skip header
     logfile 'covid-19.log'
     badfile 'covid-19.bad'
     fields terminated by "," optionally enclosed by '"'
    )
    location ('covid-19.csv')
    reject limit 0
   )
 .

SQL> /
      DATEREP    DAY    MONTH    YEAR    CASES    DEATHS                       COUNTRIESANDTERRITORIES       GEOID    COUNTRYTERRITORYCODE    POPDATA2018    CONTINENTEXP
_____________ ______ ________ _______ ________ _________ _____________________________________________ ___________ _______________________ ______________ _______________
01/01/2020         1        1    2020        0         0 Algeria                                       DZ          DZA                           42228429 Africa
01/01/2020         1        1    2020        0         0 Armenia                                       AM          ARM                            2951776 Europe
01/01/2020         1        1    2020        0         0 Australia                                     AU          AUS                           24992369 Oceania
01/01/2020         1        1    2020        0         0 Austria                                       AT          AUT                            8847037 Europe
01/01/2020         1        1    2020        0         0 Azerbaijan                                    AZ          AZE                            9942334 Europe
01/01/2020         1        1    2020        0         0 Bahrain                                       BH          BHR                            1569439 Asia
ORA-01013: user requested cancel of current operation

SQL>

I cancelled it as that’s too long to display here.

As the query is still in the buffer, I just add a CREATE TABLE in front of it:


SQL> 1
  1* select *
SQL> c/select/create table covid as select/
   create table covid as select *
  2   from external (
  3    (
  4     dateRep                    varchar2(10)
  5     ,day                       number
...

SQL> /

Table created.

SQL>

While I’m there I’ll quickly create a fact table and a dimension hierarchy:


SQL> create table continents as select rownum continent_id, continentexp continent_name from (select distinct continentexp from covid where continentexp!='Other');

Table created.

SQL> create table countries as select country_id,country_code,country_name,continent_id from (select distinct geoid country_id,countryterritorycode country_code,countriesandterritories country_name,continentexp continent_name from covid where continentexp!='Other') left join continents using(continent_name);

Table created.

SQL> create table cases as select daterep, geoid country_id,cases from covid where continentexp!='Other';

Table created.

SQL> alter table continents add primary key (continent_id);

Table altered.

SQL> alter table countries add foreign key (continent_id) references continents;

Table altered.

SQL> alter table countries add primary key (country_id);

Table altered.

SQL> alter table cases add foreign key (country_id) references countries;

Table altered.

SQL> alter table cases add primary key (country_id,daterep);

Table altered.

SQL>

This creates a CASES fact table with only one measure (covid-19 cases) and two dimensions. To keep it simple, the date dimension here is just a date column (you usually have a foreign key to a calendar dimension). The geographical dimension is a foreign key to the COUNTRIES table, which itself has a foreign key referencing the CONTINENTS table.

12c Top-N queries

In 12c we have a nice syntax for Top-N queries with the FETCH FIRST clause of the ORDER BY:


SQL> select continent_name,country_code,max(cases) from cases join countries using(country_id) join continents using(continent_id) group by continent_name,country_code order by max(cases) desc fetch first 10 rows only;

CONTINENT_NAME                 COU MAX(CASES)
------------------------------ --- ----------
America                        USA      48529
America                        BRA      33274
Europe                         RUS      17898
Asia                           CHN      15141
America                        ECU      11536
Asia                           IND       9304
Europe                         ESP       9181
America                        PER       8875
Europe                         GBR       8719
Europe                         FRA       7578

10 rows selected.

This returns the 10 countries which had the maximum covid-19 cases per day.

20c WINDOW clauses

If I want to show the date with the maximum value, I can use analytic functions and in 20c I don’t have to repeat the window several times:


SQL> select continent_name,country_code,top_date,top_cases from (
  2   select continent_name,country_code,daterep,cases
  3    ,first_value(daterep)over(w) top_date
  4    ,first_value(cases)over(w) top_cases
  5    ,row_number()over(w) r
  6    from cases join countries using(country_id) join continents using(continent_id)
  7    window w as (partition by continent_id order by cases desc)
  8   )
  9   where r=1 -- this to get the rows with the highes value only
 10   order by top_cases desc fetch first 10 rows only;

CONTINENT_NAME                 COU TOP_DATE    TOP_CASES
------------------------------ --- ---------- ----------
America                        USA 26/04/2020      48529
Europe                         RUS 02/06/2020      17898
Asia                           CHN 13/02/2020      15141
Africa                         ZAF 30/05/2020       1837
Oceania                        AUS 23/03/2020        611

The same can be done before 20c but you have to write the (partition by continent_id order by cases desc) for each projection.
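
For comparison, here is the pre-20c version of the query above, with the full window specification repeated for each function (same result, just three copies of the window definition to keep in sync):

select continent_name,country_code,top_date,top_cases from (
 select continent_name,country_code,daterep,cases
  ,first_value(daterep)over(partition by continent_id order by cases desc) top_date
  ,first_value(cases)over(partition by continent_id order by cases desc) top_cases
  ,row_number()over(partition by continent_id order by cases desc) r
  from cases join countries using(country_id) join continents using(continent_id)
 )
 where r=1
 order by top_cases desc fetch first 10 rows only;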

In the next post I’ll show a very nice feature: keeping the 3-table normalized data model but, because storage is cheap, materializing some pre-computed joins. If you are a fan of NoSQL because “storage is cheap” and “joins are expensive”, then you will see what we can do with SQL in this area…

The post Oracle 18c – select from a flat file appeared first on Blog dbi services.

Oracle 12c – pre-built join index

Yann Neuhaus - Thu, 2020-06-04 16:09
By Franck Pachot

.
This post is part of a series of small examples of recent features. I’m running this in the Oracle 20c preview in the Oracle Cloud. I have created a few tables in the previous post with a mini-snowflake schema: a fact table CASES with the covid-19 cases per country and day, and a dimension hierarchy for the country with the COUNTRIES and CONTINENTS tables.

This title may look strange for people used to Oracle. I am showing the REFRESH FAST ON STATEMENT Materialized View clause here, also known as “Synchronous Refresh for Materialized Views”. This name makes sense only when you already know materialized views, complete and fast refreshes, on commit and on-demand refreshes… But that’s not what people will look for. Indexes are also refreshed by the statements, synchronously. Imagine that they were called “Synchronous Refresh for B*Trees”, do you think they would have been so popular?

A materialized view, like an index, is a redundant structure where data is stored in a different physical layout in order to be optimal for alternative queries. For example, you ingest data per date (which is the case in my covid-19 table – each day a new row with the covid-19 cases per country). But if I want to query all points for a specific country, those are scattered through the physical segment that is behind the table (or the partition). With an index on the country_code, I can identify one country easily, because the index is sorted on the country. I may need to go to the table to get the rows, and that is expensive, but I can avoid it by adding all the attributes in the index. With Oracle, as with many databases, we can build covering indexes, for real index-only access, even if they don’t mention those names.
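
As a small illustration on the CASES table from the previous post, a covering index for per-country queries could look like the one below. This is just a sketch of the idea and not an index created anywhere in this series:

-- all queried columns are in the index, so the query below never has to visit the table rows
create index cases_country_covering on cases (country_id, daterep, cases);

select daterep, cases from cases where country_id = 'CH' order by daterep;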

But with my snowflake schema, I’ll not have the country_code in the fact table and I have to join to a dimension. This is more expensive because the index on the country_name will get the country_id and then I have to go to an index on the fact table to get the rows for this country_id. When it comes to joins, I cannot index the result of the join (I’m skipping bitmap join indexes here because I’m talking about covering indexes). What I would like is an index with values from multiple tables.

A materialized view can achieve much more than an index. We can build the result of the join in one table. And no need for event sourcing or streaming here to keep it up to date. No need to denormalize and risk inconsistency. When NoSQL pioneers tell you that storage is cheap and redundancy is the way to scale, just keep your relational database for integrity and build materialized views on top. When they tell you that joins are expensive, just materialize them upfront. Before 12c, keeping those materialized views consistent with the source required either:

  1. materialized view logs, which are similar to event sourcing except that an ON COMMIT refresh is strongly consistent
  2. partition change tracking, which is ok for bulk changes when scaling big data

This is different from indexes which are maintained immediately: when you update the row, the index is synchronized because your session has the values and the rowid and can go directly to update the index entry.
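
For contrast, here is roughly what option 1 above looks like in practice. This is a minimal sketch only (an aggregate materialized view on CASES with the materialized view log it needs; the names are mine and the extra COUNT columns come from the fast refresh restrictions):


create materialized view log on cases
  with rowid, sequence (daterep, cases)
  including new values;

-- fast refreshable at commit time, maintained from the log above
create materialized view cases_per_day
  refresh fast on commit
  as select daterep, count(*) cnt, count(cases) cnt_cases, sum(cases) sum_cases
     from cases
     group by daterep;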

refresh fast on statement

In 12c you have the benefit of both: index-like fast maintenance with rowid access, and the MView possibility of querying pre-built joins. Here is an example on the tables created in the previous post.


SQL> create materialized view flatview refresh fast on statement as
  2  select daterep,continent_name,country_name,cases from cases join countries using(country_id) join continents using(continent_id) where cases>0;

select daterep,continent_name,country_name,cases from cases join countries using(country_id) join continents using(continent_id) where cases>0
                                                                                                                                             *
ERROR at line 2:
ORA-12015: cannot create a fast refresh materialized view from a complex query

There are some limitations when we want fast refresh and we have a utility to help us understand what we have to change or add in our select clause.

explain_mview

I need to create the table that this utility writes its messages to:


@ ?/rdbms/admin/utlxmv

SQL> set sqlformat ansiconsole
SQL> set pagesize 10000

This has created mv_capabilities_table and I can run dbms_mview.explain_mview() now.

Here is the call, with the select part of the materialized view:


SQL> exec dbms_mview.explain_mview('-
  2  select daterep,continent_name,country_name,cases from cases join countries using(country_id) join continents using(continent_id) where cases>0-
  3  ');

PL/SQL procedure successfully completed.

SQL> select possible "?",capability_name,related_text,msgtxt from mv_capabilities_table where capability_name like 'REFRESH_FAST%' order by seq;

   ?                  CAPABILITY_NAME    RELATED_TEXT                                                                 MSGTXT
____ ________________________________ _______________ ______________________________________________________________________
N    REFRESH_FAST
N    REFRESH_FAST_AFTER_INSERT                        inline view or subquery in FROM list not supported for this type MV
N    REFRESH_FAST_AFTER_INSERT                        inline view or subquery in FROM list not supported for this type MV
N    REFRESH_FAST_AFTER_INSERT                        view or subquery in from list
N    REFRESH_FAST_AFTER_ONETAB_DML                    see the reason why REFRESH_FAST_AFTER_INSERT is disabled
N    REFRESH_FAST_AFTER_ANY_DML                       see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
N    REFRESH_FAST_PCT                                 PCT FAST REFRESH is not possible if query contains an inline view

SQL> rollback;

Rollback complete.

“inline view or subquery in FROM list not supported for this type MV” is actually very misleading. I use ANSI joins, which are internally translated to query blocks, and that is what is not supported.

No ANSI joins

I rewrite it with the old join syntax:


SQL> exec dbms_mview.explain_mview('-
  2  select daterep,continent_name,country_name,cases from cases , countries , continents where cases.country_id=countries.country_id and countries.continent_id=continents.continent_id and cases>0-
  3  ');

PL/SQL procedure successfully completed.

SQL> select possible "?",capability_name,related_text,msgtxt from mv_capabilities_table where capability_name like 'REFRESH_FAST%' order by seq;
   ?                  CAPABILITY_NAME       RELATED_TEXT                                                                      MSGTXT
____ ________________________________ __________________ ___________________________________________________________________________
N    REFRESH_FAST
N    REFRESH_FAST_AFTER_INSERT        CONTINENTS         the SELECT list does not have the rowids of all the detail tables
N    REFRESH_FAST_AFTER_INSERT        DEMO.CASES         the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_INSERT        DEMO.COUNTRIES     the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_INSERT        DEMO.CONTINENTS    the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_ONETAB_DML                       see the reason why REFRESH_FAST_AFTER_INSERT is disabled
N    REFRESH_FAST_AFTER_ANY_DML                          see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
N    REFRESH_FAST_PCT                                    PCT is not possible on any of the detail tables in the materialized view

SQL> rollback;

Rollback complete.

Now I need to add the ROWID of the table CONTINENTS in the materialized view.

ROWID for all tables

Yes, as I mentioned, the gap between indexes and materialized views is getting narrower. The REFRESH FAST ON STATEMENT requires access by rowid to update the materialized view, like when a statement updates an index.


SQL> exec dbms_mview.explain_mview('-
  2  select continents.rowid continent_rowid,daterep,continent_name,country_name,cases from cases , countries , continents where cases.country_id=countries.country_id and countries.continent_id=continents.continent_id and cases>0-
  3  ');

PL/SQL procedure successfully completed.

SQL> select possible "?",capability_name,related_text,msgtxt from mv_capabilities_table where capability_name like 'REFRESH_FAST%' order by seq;
   ?                  CAPABILITY_NAME       RELATED_TEXT                                                                      MSGTXT
____ ________________________________ __________________ ___________________________________________________________________________
N    REFRESH_FAST
N    REFRESH_FAST_AFTER_INSERT        COUNTRIES          the SELECT list does not have the rowids of all the detail tables
N    REFRESH_FAST_AFTER_INSERT        DEMO.CASES         the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_INSERT        DEMO.COUNTRIES     the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_INSERT        DEMO.CONTINENTS    the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_ONETAB_DML                       see the reason why REFRESH_FAST_AFTER_INSERT is disabled
N    REFRESH_FAST_AFTER_ANY_DML                          see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
N    REFRESH_FAST_PCT                                    PCT is not possible on any of the detail tables in the materialized view
SQL> rollback;

Rollback complete.

Now I also need the ROWID for COUNTRIES. I continue the same way and finally I've added the ROWID for all tables involved:


SQL> exec dbms_mview.explain_mview('-
  2  select cases.rowid case_rowid,countries.rowid country_rowid,continents.rowid continent_rowid,daterep,continent_name,country_name,cases from cases , countries , continents where cases.country_id=countries.country_id and countries.continent_id=continents.continent_id and cases>0-
  3  ');

PL/SQL procedure successfully completed.

SQL> select possible "?",capability_name,related_text,msgtxt from mv_capabilities_table where capability_name like 'REFRESH_FAST%' order by seq;
   ?                  CAPABILITY_NAME       RELATED_TEXT                                                                      MSGTXT
____ ________________________________ __________________ ___________________________________________________________________________
N    REFRESH_FAST
N    REFRESH_FAST_AFTER_INSERT        DEMO.CASES         the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_INSERT        DEMO.COUNTRIES     the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_INSERT        DEMO.CONTINENTS    the detail table does not have a materialized view log
N    REFRESH_FAST_AFTER_ONETAB_DML                       see the reason why REFRESH_FAST_AFTER_INSERT is disabled
N    REFRESH_FAST_AFTER_ANY_DML                          see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled
N    REFRESH_FAST_PCT                                    PCT is not possible on any of the detail tables in the materialized view

SQL> rollback;

Rollback complete.

Ok, now another message: “the detail table does not have a materialized view log”. But that’s exactly the purpose of statement-level refresh: being able to fast refresh without creating and maintaining materialized view logs, and without full-refreshing a table or a partition.

That's the limit of DBMS_MVIEW.EXPLAIN_MVIEW. Let's try to create the materialized view now:


SQL> create materialized view flatview refresh fast on statement as
  2  select cases.rowid case_rowid,countries.rowid country_rowid,continents.rowid continent_rowid,daterep,continent_name,country_name,cases from cases , countries , continents where cases.country_id=countries.country_id and countries.continent_id=continents.continent_id and cases>0;

select cases.rowid case_rowid,countries.rowid country_rowid,continents.rowid continent_rowid,daterep,continent_name,country_name,cases from cases , countries , continents where cases.country_id=countries.country_id and countries.continent_id=continents.continent_id and cases>0
                                                                                                                                                                                                                                                                                    *
ERROR at line 2:
ORA-32428: on-statement materialized join view error: Shape of MV is not
supported(composite PK)

SQL>

That's clear: I had created the fact table's primary key as a composite of the foreign key columns.

Surrogate key on fact table

This is not allowed by statement-level refresh, so let’s change that:


SQL> alter table cases add (case_id number);

Table altered.

SQL> update cases set case_id=rownum;

21274 rows updated.

SQL> alter table cases drop primary key;

Table altered.

SQL> alter table cases add primary key(case_id);

Table altered.

SQL> alter table cases add unique(daterep,country_id);

Table altered.

I have added a surrogate key and defined a unique key for the composite one.

Now the creation is successful:


SQL> create materialized view flatview refresh fast on statement as
  2  select cases.rowid case_rowid,countries.rowid country_rowid,continents.rowid continent_rowid,daterep,continent_name,country_name,cases from cases , countries , continents where cases.country_id=countries.country_id and countries.continent_id=continents.continent_id and cases>0;

Materialized view created.

Note that I tested later and I am able to create it with the ROWID from the fact table CASES only. But that's not a good idea: in order to propagate any change to the underlying tables, the materialized view must have all their ROWIDs, like an index does. I consider the fact that this is allowed to be a bug.

Here are the columns stored in my materialized view:


SQL> desc flatview

              Name    Null?            Type
__________________ ________ _______________
CASE_ROWID                  ROWID
COUNTRY_ROWID               ROWID
CONTINENT_ROWID             ROWID
DATEREP                     VARCHAR2(10)
CONTINENT_NAME              VARCHAR2(30)
COUNTRY_NAME                VARCHAR2(60)
CASES                       NUMBER

Storing the ROWID is not something we should recommend, as some maintenance operations may change the physical location of rows. You will need to do a complete refresh of the materialized view after an online move, for example.
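
Here is a minimal sketch of what that means, assuming an online reorganization of one of the base tables:


-- the online move changes the physical location of the rows...
alter table countries move online;

-- ...so the ROWIDs stored in the materialized view are stale and a
-- complete refresh is needed to rebuild it
exec dbms_mview.refresh('FLATVIEW', method => 'C')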

No-join query

I’ll show query rewrite in another blog post. For the moment, I’ll query this materialized view directly.

Here is a query similar to the one in the previous post:


SQL> select continent_name,country_name,top_date,top_cases from (
  2   select continent_name,country_name,daterep,cases
  3    ,first_value(daterep)over(partition by continent_name order by cases desc) top_date
  4    ,first_value(cases)over(partition by continent_name order by cases desc)top_cases
  5    ,row_number()over(partition by continent_name order by cases desc) r
  6    from flatview
  7   )
  8   where r=1 order by top_cases
  9  ;

   CONTINENT_NAME                COUNTRY_NAME      TOP_DATE    TOP_CASES
_________________ ___________________________ _____________ ____________
Oceania           Australia                   23/03/2020             611
Africa            South_Africa                30/05/2020            1837
Asia              China                       13/02/2020           15141
Europe            Russia                      02/06/2020           17898
America           United_States_of_America    26/04/2020           48529

I have replaced the country_id and continent_id by their names, as I didn't put the ids in my materialized view. And I repeated the window functions so that you can run the same query in versions lower than 20c.

This materialized view is a table. I can partition it by hash to scatter the data. I can cluster on another column. I can add indexes. I have the full power of a SQL database on it, without the need to join if you think that joins are slow. If you come from NoSQL you can see it like a DynamoDB global index. You can query it without joining, fetching all attributes with one call, and filtering on another key than the primary key. But here we always have strong consistency: the changes are replicated immediately, fully ACID. They will be committed or rolled back by the same transaction that did the change. They will be replicated synchronously or asynchronously with read-only replicas.
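
For example, and as a sketch only (the index names are mine), secondary access paths can be added directly on the materialized view:


-- the MV is a plain heap table: indexes, partitioning, compression all apply
create index flatview_country_ix   on flatview (country_name, daterep);
create index flatview_continent_ix on flatview (continent_name);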

DML on base tables

Let’s do some changes here, lowering the covid-19 cases of CHN to 42%:


SQL> alter session set sql_trace=true;

Session altered.

SQL> update cases set cases=cases*0.42 where country_id=(select country_id from countries where country_code='CHN');

157 rows updated.

SQL> alter session set sql_trace=false;

Session altered.

I have set sql_trace because I want to have a look at the magic behind it.

Now running my query on the materialized view:



SQL> select continent_name,country_name,top_date,top_cases from (
  2   select continent_name,country_name,daterep,cases
  3    ,first_value(daterep)over(partition by continent_name order by cases desc) top_date
  4    ,first_value(cases)over(partition by continent_name order by cases desc)top_cases
  5    ,row_number()over(partition by continent_name order by cases desc) r
  6    from flatview
  7   )
  8*  where r=1 order by top_cases;

   CONTINENT_NAME                COUNTRY_NAME      TOP_DATE    TOP_CASES
_________________ ___________________________ _____________ ____________
Oceania           Australia                   23/03/2020             611
Africa            South_Africa                30/05/2020            1837
Asia              India                       04/06/2020            9304
Europe            Russia                      02/06/2020           17898
America           United_States_of_America    26/04/2020           48529

CHN is not the top one in Asia anymore with the 42% correction.

The changes were immediately propagated to the materialized view like when indexes are updated, and we can see that in the trace:


SQL> column value new_value tracefile
SQL> select value from v$diag_info where name='Default Trace File';
                                                                     VALUE
__________________________________________________________________________
/u01/app/oracle/diag/rdbms/cdb1a_iad154/CDB1A/trace/CDB1A_ora_49139.trc


SQL> column value clear
SQL> host tkprof &tracefile trace.txt

TKPROF: Release 20.0.0.0.0 - Development on Thu Jun 4 15:43:13 2020

Copyright (c) 1982, 2020, Oracle and/or its affiliates.  All rights reserved.

sql_trace instruments all executions with time and number of rows. tkprof aggregates those for analysis.

The trace shows two statements on my materialized view: DELETE and INSERT.

The first one is about removing the modified rows.


DELETE FROM "DEMO"."FLATVIEW"
WHERE
 "CASE_ROWID" = :1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      157      0.00       0.00          0          0          0           0
Execute    157      0.01       0.04         42        314        433         141
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      314      0.01       0.04         42        314        433         141

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 634     (recursive depth: 1)
Number of plan statistics captured: 3

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  DELETE  FLATVIEW (cr=2 pr=1 pw=0 time=395 us starts=1)
         1          1          1   INDEX UNIQUE SCAN I_OS$_FLATVIEW (cr=2 pr=1 pw=0 time=341 us starts=1 cost=1 size=10 card=1)(object id 78628)

This has been done row-by-row but is optimized with an index on ROWID that has been created autonomously with my materialized view.
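
You can check this automatically created index (I_OS$_FLATVIEW in the row source above) in the dictionary, for example:


-- I_OS$_FLATVIEW is created and maintained by Oracle, not by me
select index_name, index_type, uniqueness
from   user_indexes
where  table_name = 'FLATVIEW';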

The second one is inserting the modified rows:


INSERT INTO  "DEMO"."FLATVIEW" SELECT "CASES".ROWID "CASE_ROWID",
  "COUNTRIES".ROWID "COUNTRY_ROWID","CONTINENTS".ROWID "CONTINENT_ROWID",
  "CASES"."DATEREP" "DATEREP","CONTINENTS"."CONTINENT_NAME" "CONTINENT_NAME",
  "COUNTRIES"."COUNTRY_NAME" "COUNTRY_NAME","CASES"."CASES" "CASES" FROM
  "CONTINENTS" "CONTINENTS","COUNTRIES" "COUNTRIES", (SELECT "CASES".ROWID
  "ROWID","CASES"."DATEREP" "DATEREP","CASES"."CASES" "CASES",
  "CASES"."COUNTRY_ID" "COUNTRY_ID" FROM "DEMO"."CASES" "CASES" WHERE
  "CASES".ROWID=(:Z)) "CASES" WHERE "CASES"."COUNTRY_ID"=
  "COUNTRIES"."COUNTRY_ID" AND "COUNTRIES"."CONTINENT_ID"=
  "CONTINENTS"."CONTINENT_ID" AND "CASES"."CASES">0


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      157      0.00       0.01          0          0          0           0
Execute    157      0.01       0.02          0        755        606         141
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      314      0.02       0.03          0        755        606         141

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 634     (recursive depth: 1)
Number of plan statistics captured: 3

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  LOAD TABLE CONVENTIONAL  FLATVIEW (cr=8 pr=0 pw=0 time=227 us starts=1)
         1          1          1   NESTED LOOPS  (cr=5 pr=0 pw=0 time=29 us starts=1 cost=3 size=47 card=1)
         1          1          1    NESTED LOOPS  (cr=3 pr=0 pw=0 time=20 us starts=1 cost=2 size=37 card=1)
         1          1          1     TABLE ACCESS BY USER ROWID CASES (cr=1 pr=0 pw=0 time=11 us starts=1 cost=1 size=17 card=1)
         1          1          1     TABLE ACCESS BY INDEX ROWID COUNTRIES (cr=2 pr=0 pw=0 time=9 us starts=1 cost=1 size=20 card=1)
         1          1          1      INDEX UNIQUE SCAN SYS_C009401 (cr=1 pr=0 pw=0 time=4 us starts=1 cost=0 size=0 card=1)(object id 78620)
         1          1          1    TABLE ACCESS BY INDEX ROWID CONTINENTS (cr=2 pr=0 pw=0 time=5 us starts=1 cost=1 size=10 card=1)
         1          1          1     INDEX UNIQUE SCAN SYS_C009399 (cr=1 pr=0 pw=0 time=2 us starts=1 cost=0 size=0 card=1)(object id 78619)

Again, this is apparently a row-by-row insert, as the “execute count” is nearly the same as the “rows count”. 157 is the number of rows I have updated.

You may think that this is a huge overhead, but those operations have been optimized for a long time. The materialized view is refreshed and ready for optimal queries: no need to queue, stream, reorg, vacuum,… And I can imagine that if this feature gets widely used, it will be further optimized with bulk operations, which would also allow compression.

Truncate

This looks all good. But… what happens if I truncate the table?


SQL> truncate table cases;

Table truncated.

SQL> select continent_name,country_name,top_date,top_cases from (
  2   select continent_name,country_name,daterep,cases
  3    ,first_value(daterep)over(partition by continent_name order by cases desc) top_date
  4    ,first_value(cases)over(partition by continent_name order by cases desc)top_cases
  5    ,row_number()over(partition by continent_name order by cases desc) r
  6    from flatview
  7   )
  8*  where r=1 order by top_cases;
   CONTINENT_NAME                COUNTRY_NAME      TOP_DATE    TOP_CASES
_________________ ___________________________ _____________ ____________
Oceania           Australia                   23/03/2020             611
Africa            South_Africa                30/05/2020            1837
Asia              India                       04/06/2020            9304
Europe            Russia                      02/06/2020           17898
America           United_States_of_America    26/04/2020           48529

Nothing changed. This is dangerous. You need to refresh it yourself. This may be a bug. What will happen if you insert data back? Note that, like with triggers, direct-path inserts will be transparently run as conventional inserts.
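
Until this is addressed, the workaround is to refresh the materialized view yourself after the truncate, for example with a complete refresh:


-- complete refresh to re-synchronize the MV with the (now empty) base table
exec dbms_mview.refresh('FLATVIEW', method => 'C')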

Joins are not expensive

This feature is really good for pre-building the joins in a composition of tables, such as a hierarchical key-value or a snowflake-dimension fact table. You can partition, compress, order, filter, index,… as with any relational table. There is no risk here with the denormalization as it is transparently maintained when you update the underlying tables.

If you develop on a NoSQL database because you have heard that normalization was invented to reduce storage, and that storage is not expensive anymore, know that this is a myth (you can read this long thread to understand the origin of this myth). Normalization is about database integrity and the separation of logical and physical layers. And that's what Oracle Database implements with this feature: you update the logical view, tables are normalized for integrity, and the physical layer transparently maintains additional structures like indexes and materialized views to keep queries under single-digit milliseconds. Today you still need to think about which indexes and materialized views to build. Some advisors may help. All those are the bricks for the future: an autonomous database where you define only the logical layer for your application and all those optimisations will be done in the background.

The post Oracle 12c – pre-built join index appeared first on Blog dbi services.

Different behavior of execution of procedure from server and plsql developer

Tom Kyte - Thu, 2020-06-04 02:06
Hi, The issue that I am going to explain here is bit weird and strange. Coming straight to the issue; There is a package which consists of 10 procedures. One of them calls rest 9 procedures one by one. In one of the 9 procedure, it throws exceptio...
Categories: DBA Blogs

Database Upgrade (with HSM Encryption) from 11.2.0.4 to 18.0.0 Fails

Tom Kyte - Thu, 2020-06-04 02:06
We are trying to upgrade the database from 11g to 18c. The wallet master key has been stored in HSM and the auto-login enabled in 11g Database. The wallet is opening in 11g Database without any issues. But, it is complaining that "Wallet-is-not-open"...
Categories: DBA Blogs

Returning a named constant in a SQL query

Tom Kyte - Thu, 2020-06-04 02:06
Is it possible to return a named constant in a SQL query? In other words, let's say we want to return the equivalent of: SELECT 'Business rule violation: '||to_char(bsnss_rule) FROM violation_table; But, we would like to pick up 'Business ...
Categories: DBA Blogs

Update On My Oracle Blogging Activity

Andrejus Baranovski - Wed, 2020-06-03 15:23
If you were following me, you probably noticed I stopped active blogging related to Oracle tech. I moved to the Medium platform and I'm writing Machine Learning related articles at Towards Data Science. I've been doing this since late 2018. So, I didn't stop blogging, just the subject has changed. If you are interested in Machine Learning, I will be happy if you follow me on Medium.

Why I stopped blogging about Oracle? There are several reasons:

1. We are building our own product Katana ML
2. Machine Learning is a complex topic and requires lots of focus
3. I decided to dedicate my time to Machine Learning and Open Source

We still keep working in Red Samurai with Oracle technology, but probably you would not see Oracle related articles from me anymore. But then who knows, never say never.

Unity and Difference

Greg Pavlik - Wed, 2020-06-03 09:43
One of the themes that traveled from Greek philosophy through until the unfolding of modernity was the neoplatonic notion of "the One". A simple unity in which all "transcendentals" - beauty, truth, goodness - both originate and in some sense coalesce. In its patristic and medieval development, these transcendentals were "en-hypostasized" or made present in persons - the idea of the Trinity, where a communion of persons exist in perfect love, perfect peace and mutual self-offering: most importantly, a perfect unity in difference. All cultures have their formative myths and this particular myth made its mark on a broad swath of humanity over the centuries - though I think in ways that usually obscured its underlying meaning (unfortunately).

Now I have always identified with this comment of Dostoevsky: "I will tell you that I am a child of this century, a child of disbelief and doubt. I am that today and will remain so until the grave": sometimes more strongly than others. But myths are not about what we believe is "real" at any point in time. The meaning of these symbols I think says something for all of us today - particularly in the United States: that the essence of humanity may be best realized in a unity in difference that can only be realized through self-offering love. In political terms we are all citizens of one country and our obligation as a society is to care for each other. This much ought to be obvious - we cannot exclude one race, one economic class, one geography, one party, from mutual care. The whole point of our systems, in fact, ought to be to realize, however imperfectly, some level of that mutual care, of mutual up-building and mutual support.

That isn't happening today. Too often we are engaged in the opposite - mutual tearing down and avoiding our responsibilities to each other. I wish there was a magic fix for this: it clearly has been a problem that has plagued our history for a long, long time. The one suggestion I can make is to find a way to reach out across boundaries with care on a day by day basis. It may seem like a person cannot make a difference. No individual drop of rain thinks it is responsible for the flood.

Functions in SQL with the Multitenant Containers Clause

Yann Neuhaus - Wed, 2020-06-03 08:27
By Clemens Bleile

To prepare a presentation about Multitenant Tuning I wanted to see the METHOD_OPT dbms_stats global preference of all my pluggable DBs. In this specific case I had 3 PDBs called pdb1, pdb2 and pdb3 in my CDB. For testing purposes I changed the global preference in pdb1 from its default ‘FOR ALL COLUMNS SIZE AUTO’ to ‘FOR ALL INDEXED COLUMNS SIZE AUTO’:

c##cbleile_adm@orclcdb@PDB1> exec dbms_stats.set_global_prefs('METHOD_OPT','FOR ALL INDEXED COLUMNS SIZE AUTO');
c##cbleile_adm@orclcdb@PDB1> select dbms_stats.get_prefs('METHOD_OPT') from dual;

DBMS_STATS.GET_PREFS('METHOD_OPT')
------------------------------------
FOR ALL INDEXED COLUMNS SIZE AUTO

Afterwards I ran my SQL with the containers clause from the root container:


c##cbleile_adm@orclcdb@CDB$ROOT> select con_id, dbms_stats.get_prefs('METHOD_OPT') method_opt from containers(dual);

    CON_ID METHOD_OPT
---------- --------------------------------
         1 FOR ALL COLUMNS SIZE AUTO
         3 FOR ALL COLUMNS SIZE AUTO
         4 FOR ALL COLUMNS SIZE AUTO
         5 FOR ALL COLUMNS SIZE AUTO

4 rows selected.

For CON_ID 3 I expected to see “FOR ALL INDEXED COLUMNS SIZE AUTO”. What is wrong here?

I actually got it to work with the following query:


c##cbleile_adm@orclcdb@CDB$ROOT> select con_id, method_opt from containers(select dbms_stats.get_prefs('METHOD_OPT') method_opt from dual);

    CON_ID METHOD_OPT
---------- ----------------------------------
         1 FOR ALL COLUMNS SIZE AUTO
         4 FOR ALL COLUMNS SIZE AUTO
         5 FOR ALL COLUMNS SIZE AUTO
         3 FOR ALL INDEXED COLUMNS SIZE AUTO

4 rows selected.

That is interesting. First of all I didn't know that you can actually use SELECT statements in the containers clause (according to the syntax diagram it has to be a table or view name only) and secondly the function dbms_stats.get_prefs in the first example has obviously been called in the root container after getting the data.

I verified that last statement with a simple test by creating a function in all containers, which just returns the container id of the current container:


create or replace function c##cbleile_adm.show_con_id return number
as
  conid number;
begin
  select to_number(sys_context('USERENV', 'CON_ID')) into conid from sys.dual;
  return conid;
end;
/

And then the test:


c##cbleile_adm@orclcdb@CDB$ROOT> select con_id, show_con_id from containers(dual);

    CON_ID SHOW_CON_ID
---------- -----------
         1           1
         3           1
         4           1
         5           1

4 rows selected.

c##cbleile_adm@orclcdb@CDB$ROOT> select con_id, show_con_id from containers(select show_con_id from dual);

    CON_ID SHOW_CON_ID
---------- -----------
         4           4
         1           1
         3           3
         5           5

4 rows selected.

That proved that the function in the select-list of the first statement is actually called in the root container after getting the data from the PDBs.
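
The takeaway is to push the function call down into the containers() subquery so that it runs inside each PDB. Here is the same pattern with another dbms_stats preference, as an illustration only:


-- evaluated in each PDB because the call is inside the containers() subquery
select con_id, estimate_percent
from   containers(select dbms_stats.get_prefs('ESTIMATE_PERCENT') estimate_percent from dual);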

Summary:
– be careful when running the containers-clause in a select-statement with a function in the select-list. You may get unexpected results.
– the syntax with a select-statement in the containers clause is interesting.

REMARK: Above tests have been performed with Oracle 19.6.

The post Functions in SQL with the Multitenant Containers Clause appeared first on Blog dbi services.

As Doors Reopen, Oracle Helps HR Teams Define What’s Next

Oracle Press Releases - Wed, 2020-06-03 08:00
Blog
As Doors Reopen, Oracle Helps HR Teams Define What’s Next

By Gretchen Alarcon, group vice president, Oracle HCM—Jun 3, 2020

Oracle Workforce Health and Safety

The COVID-19 pandemic has forced HR professionals into some of the most challenging times on record. Whether it's adapting to new workforce demands, managing dispersed teams, or maintaining positive employee experiences in a time of volatility, HR teams have taken center stage.

But the battle is not over yet. As the road to recovery continues, organizations are now having to figure out how to safely and efficiently bring their employees back into the workplace. And it’s up to HR teams to help make that happen. We at Oracle are here to help.

A few months ago, we announced free access to our Oracle Workforce Health & Safety tool for our HR customers to help keep their employees safe, and we’ve already seen it deliver tremendous value with many customers taking action. But we don’t want to stop there.

To provide additional help for the HR community, we are introducing the Oracle Employee Care Package for the New Workplace: a bundle of HCM tools and technology that will help HR teams make work safer, more supportive, smarter, and more human as we re-enter the workplace. Within the Employee Care Package, we address three of the key areas for HR teams as they navigate the return to work: safety, support, and skills.

Prioritizing Workforce Safety

As offices begin to reopen, one of the top priorities for HR teams will without a doubt be ensuring the health and safety of their workforce. Which is why we’re including our Oracle Onboarding and Oracle Workforce Health & Safety tools in the Employee Care Package.

  • Oracle Workforce Health & Safety allows HR professionals to track and report health incidents within their teams (such as confirmed COVID-19 cases), minimize exposure to the larger workforce, and take appropriate action quickly.

  • And Oracle Onboarding ensures that employees who are coming back to the workplace are re-onboarded effectively, aligned with compliance, and set up for success and productivity.

Delivering Employee Support

HR teams will also need to be ready and equipped to deliver a supportive and positive experience for their teams. Employees will likely be confused, curious, or even a little nervous when returning to the workplace, and it's up to HR teams to support them and deliver confidence. Our Oracle HR Help Desk, Oracle Digital Assistant, and Oracle Work Life tools can help.

  • With Oracle HR Help Desk, organizations can deliver an optimal employee experience by seamlessly managing COVID-19 issues, promptly resolving employee inquiries, and identifying trending issues among the workforce to resolve quickly.

  • Oracle Digital Assistant makes it easy for employees to get quick answers to commonly asked questions, whether they’re at home or at work, and through whatever channel they prefer (SMS, voice, social, etc.)

  • HR teams can use Oracle Work Life to strengthen their workplace community and empower employees to grow stronger together. This team camaraderie will improve employee productivity and well-being.

Zeroing in on Skills

As we enter into this “new normal,” we will see a heightened importance around employee learning and reskilling as new skills gaps emerge and more roles are ready to be filled.

  • With Oracle Learning, HR teams can help employees reskill and readjust to the new workplace. Recommended trainings will help employees improve productivity, acquire new skills, and discover new opportunities.

Now more than ever, HR teams are in the spotlight and driving their organizations to define what’s next. With the right tools in place, HR teams can help their workforce readjust to this changing world. And we’re here to help every step of the way.

For more information about Oracle’s Employee Care Package, click here: http://oracle.com/employee-care-package

Fetch First Update

Jonathan Lewis - Wed, 2020-06-03 07:48

A question about mixing the (relatively new) “fetch first” syntax with “select for update” appeared a few days ago on the Oracle Developer Forum. The requirement was for a query something like:


select
        *
from
        t1
order by
        n1
fetch
        first 10 rows only
for     update
;

The problem with this query is that it results in Oracle raising error ORA-02014: cannot select FOR UPDATE from view with DISTINCT, GROUP BY, etc. The error doesn’t seem to be particularly relevant, of course, until you remember that “fetch first” creates an inline view using the analytic row_number() under the covers.
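
To see why, here is roughly the shape of that internal rewrite (a sketch only); it should fail with the same error because of the analytic inline view:


-- should raise ORA-02014 for the same reason as the "fetch first ... for update" version
select
        *
from    (
        select t1.*, row_number() over (order by n1) rn
        from   t1
        )
where   rn <= 10
for     update
;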

One suggested solution was to use PL/SQL to open a cursor with a pure select, then use a loop to lock each row in turn. This would need a little defensive programming, of course, since each individual “select for update” would be running at a different SCN from the driving loop, and there would be some risk of concurrency problems (locking, or competing data change) occurring.
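
Here is a minimal sketch of that approach, without the defensive coding mentioned above:


declare
        l_row   t1%rowtype;
begin
        for r in (
                select  rowid rid
                from    t1
                order by
                        n1
                fetch   first 10 rows only
        ) loop
                -- lock each identified row individually (at its own SCN)
                select  *
                into    l_row
                from    t1
                where   rowid = r.rid
                for     update;
        end loop;
end;
/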

There is a pure – though contorted – SQL solution where we take the driving SQL and put it into a subquery that generates the rowids of the rows we want to lock, as follows:


select
        /*+
                qb_name(main)
        */
        *
from
        t1
where
        t1.rowid in (
                select
                        /*+ qb_name(inline) unnest no_merge */
                        t1a.rowid
                from
                        t1 t1a
                order by
                        t1a.n1
                fetch 
                        first 10 rows only
        )
for update
;

The execution plan for this query is critical – so once you can get it working it would be a good idea to create a baseline (or SQL Patch) and attach it to the query. It is most important that the execution plan should be the equivalent of the following:


select  /*+   qb_name(main)  */  * from  t1 where  t1.rowid in (
select    /*+ qb_name(inline) unnest no_merge */    t1a.rowid   from
t1 t1a   order by    t1a.n1   fetch    first 10 rows only  ) for update

Plan hash value: 1286935441

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |      |      1 |        |     10 |00:00:00.01 |     190 |       |       |          |
|   1 |  FOR UPDATE                   |      |      1 |        |     10 |00:00:00.01 |     190 |       |       |          |
|   2 |   BUFFER SORT                 |      |      2 |        |     20 |00:00:00.01 |     178 |  2048 |  2048 | 2048  (0)|
|   3 |    NESTED LOOPS               |      |      1 |     10 |     10 |00:00:00.01 |     178 |       |       |          |
|*  4 |     VIEW                      |      |      1 |     10 |     10 |00:00:00.01 |     177 |       |       |          |
|*  5 |      WINDOW SORT PUSHED RANK  |      |      1 |  10000 |     10 |00:00:00.01 |     177 |  2048 |  2048 | 2048  (0)|
|   6 |       TABLE ACCESS FULL       | T1   |      1 |  10000 |  10000 |00:00:00.01 |     177 |       |       |          |
|   7 |     TABLE ACCESS BY USER ROWID| T1   |     10 |      1 |     10 |00:00:00.01 |       1 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("from$_subquery$_003"."rowlimit_$$_rownumber"<=10)
   5 - filter(ROW_NUMBER() OVER ( ORDER BY "T1A"."N1")<=10)

Critically you need the VIEW operation to be the driving query of a nested loop join that does the “table access by user rowid” joinback. In my case the query has used a full tablescan to identify the small number of rowids needed – in a production system that would be the part of the statement that should first be optimised.

It’s an unfortunate feature of this query structure (made messier by the internal rewrite for the analytic function) that it’s not easy to generate a correct set of hints to force the plan until after you’ve already managed to get the plan. Here’s the outline information that shows the messiness of the hints I would have needed:


Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"INLINE")
      OUTLINE_LEAF(@"SEL$A3F38ADC")
      UNNEST(@"SEL$1")
      OUTLINE(@"INLINE")
      OUTLINE(@"MAIN")
      OUTLINE(@"SEL$1")
      NO_ACCESS(@"SEL$A3F38ADC" "from$_subquery$_003"@"SEL$1")
      ROWID(@"SEL$A3F38ADC" "T1"@"MAIN")
      LEADING(@"SEL$A3F38ADC" "from$_subquery$_003"@"SEL$1" "T1"@"MAIN")
      USE_NL(@"SEL$A3F38ADC" "T1"@"MAIN")
      FULL(@"INLINE" "T1A"@"INLINE")
      END_OUTLINE_DATA
  */

You'll notice that my /*+ unnest */ hint is now modified – for inclusion at the start of the query – to /*+ unnest(@sel$1) */ rather than the /*+ unnest(@inline) */ that you might have expected. That's the side effect of the optimizer doing the “fetch first” rewrite before applying “missing” query block names. If I wanted to write a full hint set into the query itself (leaving the qb_name() hints in place but removing the unnest and merge I had originally) I would need the following:


/*+
        unnest(@sel$1)
        leading(@sel$a3f38adc from$_subquery$_003@sel$1 t1@main)
        use_nl( @sel$a3f38adc t1@main)
        rowid(  @sel$a3f38adc t1@main)
*/

I did make a bit of a fuss about the execution plan. I think it's probably very important that everyone who runs this query gets exactly the same plan and the plan should be this nested loop. Although there's a BUFFER SORT at operation 2 that probably ensures everyone would get the same data in the same order regardless of the execution plan before locking any of it, I would be a little worried that different plans might somehow be allowed to lock the data in a different order, thus allowing for deadlocks.
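
As a footnote to the comment about baselines, here is a minimal sketch of loading the plan from the cursor cache into a SQL plan baseline; the sql_id substitution variable is a placeholder you would have to supply, and 1286935441 is the plan hash value reported above:


declare
        n       pls_integer;
begin
        -- capture the plan currently in the cursor cache as an accepted baseline
        n := dbms_spm.load_plans_from_cursor_cache(
                sql_id          => '&m_sql_id',
                plan_hash_value => 1286935441
        );
        dbms_output.put_line(n || ' plan(s) loaded');
end;
/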
