Oracle Training in Chennai provides real-time and practical training exclusively on Oracle, with a 100% JOB Guaranteed course. Our ORACLE training institute in Chennai has been providing the best possible ORACLE course in Chennai, through both classroom and corporate training, for more than seven years. Our ORACLE training courses include SQL, PL/SQL, DBA, RAC DBA, APPS DBA, ORACLE Live Projects, Performance Tuning, and OCA and OCP Certification. At ORACLE Training Institute, we provide complete PRACTICAL & REAL-TIME training with ORACLE certification guidance, case studies and projects exclusively on ORACLE technologies.
FREE Demo Session:
Try two FREE CLASSES to see for yourself the quality of the training.
Learn Oracle from the best ORACLE Training Institute in Chennai, which offers a complete ORACLE course in Chennai delivered by a well-experienced ORACLE Certified Trainer with more than 12 years of real-time development experience using ORACLE technologies. We are proud to mention that we are among the best institutes providing ORACLE COMPLETE PRACTICAL REAL-TIME TRAINING, and hence we work closely with various companies in India. We are today one of the few training companies providing complete practical training on ORACLE technologies.
As a Trainer, Dinesh has over 12 years of experience in ORACLE implementation and is very keen on delivering the BEST ORACLE training. He is a professional well known for his expertise on Oracle. Dinesh works as an Oracle Consultant & Instructor.
He has also served as a Senior Instructor for Oracle University, delivered 120 corporate trainings, and trained 1,500 corporate and fresher employees.
He is an Oracle Certified Master (OCM), with expertise in RAC, Data Guard, ASM, Oracle Exadata, RMAN, Oracle Performance Tuning, Streams, Security and more.
He is among the few Oracle Certified Masters (OCMs) in the world to have achieved the following certifications in his area of research.
- Oracle Certified Professional (OCP) 9i
- Oracle Certified Professional (OCP) 10g
- Oracle Certified Professional (OCP) 11g
- Oracle 10g Certified RAC Expert
- Oracle 10g Certified Master (OCM)
- Oracle 11g Exadata Certified Implementation Specialist
Greens Technology is a preferred vendor for many corporates for their Oracle training needs, and much of the credit goes to Dinesh, who has personally taken more than 120 corporate batches. He works closely with project managers and their teams to understand their project-specific requirements and customizes the training program for them. He has also trained many corporate employees, upgrading them to the latest versions and advanced features of Oracle.
We are committed to providing complete PRACTICAL training and ORACLE certification training exclusively on SQL, PL/SQL, DBA, RAC DBA, APPS DBA, ORACLE Live Projects, Performance Tuning, and OCA and OCP Certification. Study material and practice material are included with the course.
Oracle Parallel Query (OPQ) FAQ. Oracle Parallel Query was previously called Parallel Query Option.
Oracle Parallel Query (formerly Oracle Parallel Query Option or PQO) allows one to break up a given SQL statement so that its parts can run simultaneously on different processors in a multi-processor machine. Typical operations that can run in parallel are full table scans, sorts, sub-queries, data loading, etc.
Parallel Query can improve performance of certain types of operations dramatically and is commonly used in Decision Support and Data Warehousing applications.
Parallel Query allows one to break a SELECT or DML statement into multiple smaller chunks and have PQ slaves execute those chunks on separate CPUs within a single box.
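As a minimal sketch (the SALES and SALES_HIST table names are purely illustrative, not part of the original FAQ), a SELECT can be parallelized with a hint, while parallel INSERT/UPDATE/DELETE additionally requires parallel DML to be enabled for the session:

-- Parallel full scan and aggregation at degree 4.
SELECT /*+ PARALLEL(s, 4) */ lookup_id, COUNT(*)
FROM   sales s
GROUP  BY lookup_id;

-- Parallel DML must be enabled per session before a parallel direct-path load.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(sales_hist, 4) */ INTO sales_hist
SELECT /*+ PARALLEL(s, 4) */ * FROM sales s;
COMMIT;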
Oracle Real Application Clusters (RAC) allows one to take advantage of a multi-node clustered environment for availability and performance reasons. It is commonly used to access a very large database from different nodes of a cluster. More information about Oracle RAC can be obtained from the RAC FAQ.
If both RAC and OPQ are available one can split operations across multiple CPUs and multiple nodes in a cluster for even further performance improvements.
After setting the INIT.ORA parameters necessary for Parallel Query to work, enable it either by setting a degree of parallelism on the table or by adding PARALLEL hints to individual queries:
ALTER TABLE table_name PARALLEL (DEGREE 8);
SELECT --+ PARALLEL(table_alias, degree, nodes)
       *
FROM   table_name ...

SELECT /*+ PARALLEL(table_alias, degree, nodes) */ * FROM table_name ...
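As a quick sanity check (a sketch, assuming a release where DBMS_XPLAN is available, i.e. 9i Release 2 onwards; the table name is illustrative), the execution plan should contain PX steps such as PX BLOCK ITERATOR and PX SEND/PX RECEIVE when the statement will really run in parallel:

EXPLAIN PLAN FOR
SELECT /*+ PARALLEL(t, 4) */ COUNT(*) FROM some_big_table t;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);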
To disable Parallel Query again, execute the following DDL commands per table:
ALTER TABLE table_name PARALLEL (DEGREE 1 INSTANCES 1);
ALTER TABLE table_name NOPARALLEL;
ALTER INDEX index_name PARALLEL (DEGREE 1 INSTANCES 1);
ALTER INDEX index_name NOPARALLEL;
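To confirm what is currently in effect (a sketch; BIG_TABLE is just an example object name), the per-object DEGREE and INSTANCES settings are visible in the data dictionary:

SELECT table_name, degree, instances
FROM   user_tables
WHERE  table_name = 'BIG_TABLE';

SELECT index_name, degree, instances
FROM   user_indexes
WHERE  table_name = 'BIG_TABLE';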
You can also remove the INIT.ORA parameters that allow Parallel Query to work.
The following INIT.ORA initialization parameters can be set to control Parallel Query execution:
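As a hedged sketch only (the values shown are illustrative, not recommendations), the parameters most commonly adjusted include:

# Commonly used Parallel Query initialization parameters (example values only)
parallel_min_servers            = 0      # PQ slaves started at instance startup
parallel_max_servers            = 8      # upper limit on the number of PQ slave processes
parallel_min_percent            = 0      # minimum % of requested slaves that must be available, else the statement fails
parallel_threads_per_cpu        = 2      # used when computing the default degree of parallelism
parallel_execution_message_size = 4096   # size (bytes) of the messages passed between PQ slaves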
Use the following Oracle data dictionary views to monitor parallel queries:
Prior to Oracle 8i:
SELECT * FROM sys.v$pq_sysstat;
Oracle 8i and above:
SELECT * FROM v$pq_sysstat;
SELECT * FROM v$px_process;
SELECT * FROM v$px_sesstat;
SELECT * FROM v$px_process_sysstat;
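A session-level check is also useful (a sketch): V$PQ_SESSTAT reports, per statistic, the value for the last query run in the current session and the session total, which shows whether the last statement was actually parallelized and how many slaves it used.

SELECT statistic, last_query, session_total
FROM   v$pq_sesstat;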
The more CPUs you have, the bigger the benefit you can expect from Parallel Query. Nevertheless, even a single-CPU system can benefit from using it, because the system can continue processing one thread while waiting for I/O or network activity in another.
Before enabling it on a single-CPU system, ensure the system is not too busy and start with a low value for PARALLEL_MAX_SERVERS (e.g. 2 or 4).
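If the instance runs from an spfile (a sketch; the value 4 is only an example), this limit can also be adjusted without a restart:

ALTER SYSTEM SET parallel_max_servers = 4 SCOPE = BOTH;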
If not carefully watched over, Parallel Query can completely saturate a machine. It is not difficult to bring some of the most powerful machines to their knees. One needs to carefully balance the number of people executing Parallel Query Operations and the degree of parallelism with the number of CPUs in the system.
Also note that executing things in parallel is not always faster. In fact, you need to test all queries to ensure that they will benefit from the increased parallelism before permanently enabling it.
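One simple way to test this (a sketch in SQL*Plus; the table name is illustrative, and on releases before 10g the hint is spelled NOPARALLEL rather than NO_PARALLEL) is to time a serial and a parallel run of the same statement:

SET TIMING ON

-- Serial baseline: override any table-level PARALLEL setting.
SELECT /*+ NO_PARALLEL(t) */ COUNT(*) FROM some_big_table t;

-- The same query at degree 4.
SELECT /*+ PARALLEL(t, 4) */ COUNT(*) FROM some_big_table t;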
This article presents a simple method for partitioning an existing table using the EXCHANGE PARTITION syntax. The contents of the article should not be used as an indication of when and how to partition objects; it simply shows the method of getting from A to B. Remember, in many cases incorrect partitioning is worse than no partitioning!
First we create a sample table as our starting point.
-- Create and populate a small lookup table.
CREATE TABLE lookup (
  id           NUMBER(10),
  description  VARCHAR2(50)
);

ALTER TABLE lookup ADD (
  CONSTRAINT lookup_pk PRIMARY KEY (id)
);

INSERT INTO lookup (id, description) VALUES (1, 'ONE');
INSERT INTO lookup (id, description) VALUES (2, 'TWO');
INSERT INTO lookup (id, description) VALUES (3, 'THREE');
COMMIT;

-- Create and populate a larger table that we will later partition.
CREATE TABLE big_table (
  id            NUMBER(10),
  created_date  DATE,
  lookup_id     NUMBER(10),
  data          VARCHAR2(50)
);

DECLARE
  l_lookup_id    lookup.id%TYPE;
  l_create_date  DATE;
BEGIN
  FOR i IN 1 .. 1000000 LOOP
    IF MOD(i, 3) = 0 THEN
      l_create_date := ADD_MONTHS(SYSDATE, -24);
      l_lookup_id   := 2;
    ELSIF MOD(i, 2) = 0 THEN
      l_create_date := ADD_MONTHS(SYSDATE, -12);
      l_lookup_id   := 1;
    ELSE
      l_create_date := SYSDATE;
      l_lookup_id   := 3;
    END IF;

    INSERT INTO big_table (id, created_date, lookup_id, data)
    VALUES (i, l_create_date, l_lookup_id, 'This is some data for ' || i);
  END LOOP;
  COMMIT;
END;
/

-- Apply some constraints to the table.
ALTER TABLE big_table ADD (
  CONSTRAINT big_table_pk PRIMARY KEY (id)
);

CREATE INDEX bita_created_date_i ON big_table(created_date);
CREATE INDEX bita_look_fk_i ON big_table(lookup_id);

ALTER TABLE big_table ADD (
  CONSTRAINT bita_look_fk
  FOREIGN KEY (lookup_id) REFERENCES lookup(id)
);

-- Gather statistics on the schema objects.
EXEC DBMS_STATS.gather_table_stats(USER, 'LOOKUP', cascade => TRUE);
EXEC DBMS_STATS.gather_table_stats(USER, 'BIG_TABLE', cascade => TRUE);
Next we create a new table with the appropriate partition structure to act as the destination table. The destination must have the same constraints and indexes defined.
-- Create partitioned table.
CREATE TABLE big_table2 (
  id            NUMBER(10),
  created_date  DATE,
  lookup_id     NUMBER(10),
  data          VARCHAR2(50)
)
PARTITION BY RANGE (created_date)
(PARTITION big_table_2007 VALUES LESS THAN (MAXVALUE));

-- Add new keys, FKs and triggers.
ALTER TABLE big_table2 ADD (
  CONSTRAINT big_table_pk2 PRIMARY KEY (id)
);

CREATE INDEX bita_created_date_i2 ON big_table2(created_date) LOCAL;
CREATE INDEX bita_look_fk_i2 ON big_table2(lookup_id) LOCAL;

ALTER TABLE big_table2 ADD (
  CONSTRAINT bita_look_fk2
  FOREIGN KEY (lookup_id) REFERENCES lookup(id)
);
With this destination table in place we can start the conversion.
We now switch the segments associated with the source table and the partition in the destination table using the EXCHANGE PARTITION syntax.
ALTER TABLE big_table2
  EXCHANGE PARTITION big_table_2007
  WITH TABLE big_table
  WITHOUT VALIDATION
  UPDATE GLOBAL INDEXES;
The exchange operation should not be affected by the size of the segments involved.
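A simple row-count check (a sketch based on the tables created above) confirms the segments really have been swapped: the rows now belong to the partition of BIG_TABLE2, and the original BIG_TABLE segment is left empty.

SELECT COUNT(*) FROM big_table;   -- expected: 0
SELECT COUNT(*) FROM big_table2;  -- expected: 1000000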
Once this is complete we can drop the old table and rename the new table, along with its constraints and indexes.
DROP TABLE big_table;
RENAME big_table2 TO big_table;

ALTER TABLE big_table RENAME CONSTRAINT big_table_pk2 TO big_table_pk;
ALTER TABLE big_table RENAME CONSTRAINT bita_look_fk2 TO bita_look_fk;

ALTER INDEX big_table_pk2 RENAME TO big_table_pk;
ALTER INDEX bita_look_fk_i2 RENAME TO bita_look_fk_i;
ALTER INDEX bita_created_date_i2 RENAME TO bita_created_date_i;
Next, we split the single large partition into smaller partitions as required.
ALTER TABLE big_table
  SPLIT PARTITION big_table_2007 AT (TO_DATE('31-DEC-2005 23:59:59', 'DD-MON-YYYY HH24:MI:SS'))
  INTO (PARTITION big_table_2005,
        PARTITION big_table_2007)
  UPDATE GLOBAL INDEXES;

ALTER TABLE big_table
  SPLIT PARTITION big_table_2007 AT (TO_DATE('31-DEC-2006 23:59:59', 'DD-MON-YYYY HH24:MI:SS'))
  INTO (PARTITION big_table_2006,
        PARTITION big_table_2007)
  UPDATE GLOBAL INDEXES;

EXEC DBMS_STATS.gather_table_stats(USER, 'BIG_TABLE', cascade => TRUE);
The following queries show that the partitioning was successful.
SELECT partitioned
FROM   user_tables
WHERE  table_name = 'BIG_TABLE';

PAR
---
YES

1 row selected.

SELECT partition_name, num_rows
FROM   user_tab_partitions
WHERE  table_name = 'BIG_TABLE';

PARTITION_NAME                   NUM_ROWS
------------------------------ ----------
BIG_TABLE_2005                     335326
BIG_TABLE_2006                     332730
BIG_TABLE_2007                     334340

3 rows selected.