Lead2Pass: Latest Free Oracle 1Z0-060 Dumps (71-80) Download!

QUESTION 71
To enable the Database Smart Flash Cache, you configure the following parameters:
DB_FLASH_CACHE_FILE = '/dev/flash_device_1', '/dev/flash_device_2'
DB_FLASH_CACHE_SIZE=64G
What is the result when you start up the database instance?

A.    It results in an error because these parameter settings are invalid.
B.    One 64G flash cache file will be used.
C.    Two 64G flash cache files will be used.
D.    Two 32G flash cache files will be used.

Answer: B

QUESTION 72
You executed this command to create a password file:
$ orapwd file = orapworcl entries = 10 ignorecase = N
Which two statements are true about the password file?

A.    It will permit the use of uppercase passwords for database users who have been granted the SYSOPER role.
B.    It contains username and passwords of database users who are members of the OSOPER operating
system group.
C.    It contains usernames and passwords of database users who are members of the OSDBA operating
system group.
D.    It will permit the use of lowercase passwords for database users who have been granted the SYSDBA role.
E.    It will not permit the use of mixed case passwords for the database users who have been granted the
SYSDBA role.

Answer: AD
Explanation:
*You can create a password file using the password file creation utility, ORAPWD.
* Adding Users to a Password File
When you grant SYSDBA or SYSOPER privileges to a user, that user’s name and privilege information are added to the password file. If the server does not have an EXCLUSIVE password file (that is, if the initialization parameter REMOTE_LOGIN_PASSWORDFILE is NONE or SHARED, or the password file is missing), Oracle Database issues an error if you attempt to grant these privileges.
A user’s name remains in the password file only as long as that user has at least one of these two privileges. If you revoke both of these privileges, Oracle Database removes the user from the password file.
*The syntax of the ORAPWD command is as follows:
ORAPWD FILE=filename [ENTRIES=numusers]
[FORCE={Y|N}] [IGNORECASE={Y|N}] [NOSYSDBA={Y|N}]
*IGNORECASE
If this argument is set to y, passwords are case-insensitive. That is, case is ignored when comparing the password that the user supplies during login with the password in the password file.
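As a hedged illustration of the case-sensitivity effect (the user name and password below are assumptions, not part of the question):
$ orapwd file=$ORACLE_HOME/dbs/orapworcl entries=10 ignorecase=n
SQL> GRANT SYSDBA TO jane;
SQL> CONNECT jane/MixedCasePwd1 AS SYSDBA
Because IGNORECASE=N, the password comparison for the SYSDBA/SYSOPER connection is case-sensitive, so mixed-case (including lowercase) passwords are honored.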

QUESTION 73
Identify three valid methods of opening pluggable databases (PDBs).

A.    ALTER PLUGGABLE DATABASE OPEN ALL issued from the root
B.    ALTER PLUGGABLE DATABASE OPEN ALL issued from a PDB
C.    ALTER PLUGGABLE DATABASE PDB OPEN issued from the seed
D.    ALTER DATABASE PDB OPEN issued from the root
E.    ALTER DATABASE OPEN issued from that PDB
F.    ALTER PLUGGABLE DATABASE PDB OPEN issued from another PDB
G.    ALTER PLUGGABLE DATABASE OPEN issued from that PDB

Answer: AEG
Explanation:
E: You can perform all ALTER PLUGGABLE DATABASE tasks by connecting to a PDB and running the corresponding ALTER DATABASE statement. This functionality is provided to maintain backward compatibility for applications that have been migrated to a CDB environment.
A, G: When you issue an ALTER PLUGGABLE DATABASE OPEN statement, READ WRITE is the default unless a PDB being opened belongs to a CDB that is used as a physical standby database, in which case READ ONLY is the default.
You can specify which PDBs to modify in the following ways:
List one or more PDBs.
Specify ALL to modify all of the PDBs.
Specify ALL EXCEPT to modify all of the PDBs, except for the PDBs listed.
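A hedged sketch of the three valid forms; the PDB name PDB1 is an assumption, and in the documented syntax the PDB list or ALL precedes the OPEN keyword:
SQL> ALTER PLUGGABLE DATABASE ALL OPEN;        -- from the root (option A)
SQL> ALTER SESSION SET CONTAINER = PDB1;       -- switch into the PDB
SQL> ALTER PLUGGABLE DATABASE OPEN;            -- from that PDB (option G)
-- or, equivalently, from that PDB (option E):
SQL> ALTER DATABASE OPEN;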

QUESTION 74
You administer an online transaction processing (OLTP) system whose database is stored in Automatic Storage Management (ASM) and whose disk groups use normal redundancy.
One of the ASM disks goes offline and is then dropped because it was not brought back online before DISK_REPAIR_TIME elapsed.
When the disk is replaced and added back to the disk group, the ensuing rebalance operation is too slow.
Which two recommendations should you make to speed up the rebalance operation if this type of failure happens again?

A.    Increase the value of the ASM_POWER_LIMIT parameter.
B.    Set the DISK_REPAIR_TIME disk attribute to a lower value.
C.    Specify the statement that adds the disk back to the disk group.
D.    Increase the number of ASMB processes.
E.    Increase the number of DBWR_IO_SLAVES in the ASM instance.

Answer: AD
Explanation:
A: ASM_POWER_LIMIT specifies the maximum power on an Automatic Storage Management instance for disk rebalancing. The higher the limit, the faster rebalancing will complete. Lower values will take longer, but consume fewer processing and I/O resources.
D:
*Normally a separate process is fired up to do that rebalance. This will take a certain amount of time. If you want it to happen faster, fire up more processes. You tell ASM it can add more processes by increasing the rebalance power.
*ASMB
ASM Background Process
Communicates with the ASM instance, managing storage and providing statistics
Incorrect:
Not B: A higher, not a lower, value of DISK_REPAIR_TIME would be helpful here.
Not E: If you implement database writer I/O slaves by setting the DBWR_IO_SLAVES parameter, you configure a single (master) DBWR process that has slave processes that are subservient to it. In addition, I/O slaves can be used to "simulate" asynchronous I/O on platforms that do not support asynchronous I/O or implement it inefficiently. Database I/O slaves provide non-blocking, asynchronous requests to simulate asynchronous I/O.
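A hedged sketch of raising rebalance power; the disk group name DATA and the disk path are assumptions:
SQL> ALTER SYSTEM SET ASM_POWER_LIMIT = 8;                               -- instance-wide default power
SQL> ALTER DISKGROUP data ADD DISK '/dev/raw/raw5' REBALANCE POWER 11;   -- per-operation override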

QUESTION 75
You are administering a database and you receive a requirement to apply the following restrictions:
1. A connection must be terminated after four unsuccessful login attempts by a user.
2. A user should not be able to create more than four simultaneous sessions.
3. A user session must be terminated after 15 minutes of inactivity.
4. Users must be prompted to change their passwords every 15 days.
How would you accomplish these requirements?

A.    by granting a secure application role to the users
B.    by creating and assigning a profile to the users and setting the REMOTE_OS_AUTHENT parameter
to FALSE
C.    By creating and assigning a profile to the users and setting the SEC_MAX_FAILED_LOGIN_ATTEMPTS
parameter to 4
D.    By Implementing Fine-Grained Auditing (FGA) and setting the REMOTE_LOGIN_PASSWORD_FILE
parameter to NONE.
E.    By implementing the database resource Manager plan and setting the SEC_MAX_FAILED_LOGIN_ATTEMPTS
parameters to 4.

Answer: A
Explanation:
You can design your applications to automatically grant a role to the user who is trying to log in, provided the user meets criteria that you specify. To do so, you create a secure application role, which is a role that is associated with a PL/SQL procedure (or PL/SQL package that contains multiple procedures). The procedure validates the user: if the user fails the validation, then the user cannot log in. If the user passes the validation, then the procedure grants the user a role so that he or she can use the application. The user has this role only as long as he or she is logged in to the application. When the user logs out, the role is revoked.
Incorrect:
Not B: REMOTE_OS_AUTHENT specifies whether remote clients will be authenticated with the value of the OS_AUTHENT_PREFIX parameter.
Not C, not E: SEC_MAX_FAILED_LOGIN_ATTEMPTS specifies the number of authentication attempts that can be made by a client on a connection to the server process. After the specified number of failed attempts, the connection is automatically dropped by the server process.
Not D: REMOTE_LOGIN_PASSWORDFILE specifies whether Oracle checks for a password file.
Values:
shared
One or more databases can use the password file. The password file can contain SYS as well as non-SYS users.
exclusive
The password file can be used by only one database. The password file can contain SYS as well as non-SYS users.
none
Oracle ignores any password file. Therefore, privileged users must be authenticated by the
operating system.
Note:
The REMOTE_OS_AUTHENT parameter is deprecated. It is retained for backward compatibility only.
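For context, the four restrictions in the stem correspond to standard profile limits; a hedged sketch (the profile name is an assumption):
SQL> CREATE PROFILE app_user_prof LIMIT
       FAILED_LOGIN_ATTEMPTS 4
       SESSIONS_PER_USER     4
       IDLE_TIME             15      -- minutes; requires RESOURCE_LIMIT = TRUE
       PASSWORD_LIFE_TIME    15;     -- days
SQL> ALTER USER scott PROFILE app_user_prof;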

QUESTION 76
A senior DBA asked you to execute the following command to improve performance:
SQL> ALTER TABLE subscribe_log STORAGE (BUFFER_POOL RECYCLE);
You checked the data in the SUBSCRIBE_LOG table and found that it is a large table containing one million rows.
What could be a reason for this recommendation?

A.    The keep pool is not configured.
B.    Automatic Workarea Management is not configured.
C.    Automatic Shared Memory Management is not enabled.
D.    The data blocks in the SUBSCRIBE_LOG table are rarely accessed.
E.    All the queries on the SUBSCRIBE_LOG table are rewritten to a materialized view.

Answer: D
Explanation:
Most of the rows in the SUBSCRIBE_LOG table are accessed only rarely (for example, once a week), so placing the table in the recycle pool keeps its blocks from displacing frequently accessed data in the default buffer pool.
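A hedged sketch of the recycle pool setup such a recommendation assumes (the size is illustrative):
SQL> ALTER SYSTEM SET DB_RECYCLE_CACHE_SIZE = 256M;
SQL> ALTER TABLE subscribe_log STORAGE (BUFFER_POOL RECYCLE);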

QUESTION 77
Which three tasks can be automatically performed by the Automatic Data Optimization feature of Information Lifecycle Management (ILM)?

A.    Tracking the most recent read time for a table segment in a user tablespace
B.    Tracking the most recent write time for a table segment in a user tablespace
C.    Tracking insert time by row for table rows
D.    Tracking the most recent write time for a table block
E.    Tracking the most recent read time for a table segment in the SYSAUX tablespace
F.    Tracking the most recent write time for a table segment in the SYSAUX tablespace

Answer: ABC
Explanation:
*You can specify policies for ADO at the row, segment, and tablespace level when creating and altering tables with SQL statements.
* (Not E, Not F) When Heat Map is enabled, all accesses are tracked by the in-memory activity tracking module. Objects in the SYSTEM and SYSAUX tablespaces are not tracked.
*To implement your ILM strategy, you can use Heat Map in Oracle Database to track data access and modification.
Heat Map provides data access tracking at the segment-level and data modification tracking at the segment and row level.
*To implement your ILM strategy, you can use Heat Map in Oracle Database to track data access and modification. You can also use Automatic Data Optimization (ADO) to automate the compression and movement of data between different tiers of storage within the database.
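A hedged sketch of enabling Heat Map and attaching a segment-level ADO policy (the table name and threshold are assumptions):
SQL> ALTER SYSTEM SET HEAT_MAP = ON;
SQL> ALTER TABLE sales ILM ADD POLICY
       ROW STORE COMPRESS ADVANCED SEGMENT
       AFTER 30 DAYS OF NO MODIFICATION;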

QUESTION 78
Which two partitioned table maintenance operations support asynchronous Global Index Maintenance in Oracle Database 12c?

A.    ALTER TABLE SPLIT PARTITION
B.    ALTER TABLE MERGE PARTITION
C.    ALTER TABLE TRUNCATE PARTITION
D.    ALTER TABLE ADD PARTITION
E.    ALTER TABLE DROP PARTITION
F.    ALTER TABLE MOVE PARTITION

Answer: CE
Explanation:
Asynchronous Global Index Maintenance for DROP and TRUNCATE PARTITION. This feature enables global index maintenance to be delayed and decoupled from DROP PARTITION and TRUNCATE PARTITION operations without making a global index unusable. Enhancements include faster DROP and TRUNCATE partition operations and the ability to delay index maintenance to off-peak time.
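A hedged sketch (table, partition, and schema names are assumptions); the deferred index cleanup can later be forced with the DBMS_PART package:
SQL> ALTER TABLE orders DROP PARTITION p_2012 UPDATE GLOBAL INDEXES;   -- index stays usable, cleanup deferred
SQL> EXEC DBMS_PART.CLEANUP_GIDX('SH', 'ORDERS');                      -- purge orphaned index entries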

QUESTION 79
You configure your database Instance to support shared server connections.
Which two memory areas that are part of the PGA are stored in the SGA instead for shared server connections?

A.    User session data
B.    Stack space
C.    Private SQL area
D.    Location of the runtime area for DML and DDL Statements
E.    Location of a part of the runtime area for SELECT statements

Answer: AC
Explanation:
A: The PGA itself is subdivided. The UGA (User Global Area) contains session state information, including things like package-level variables, cursor state, etc. Note that, with shared server, the UGA is in the SGA. It has to be, because shared server means that the session state needs to be accessible to all server processes, as any one of them could be assigned a particular session. However, with dedicated server (which is likely what you're using), the UGA is allocated in the PGA.
C: The location of a private SQL area depends on the type of connection established for a session. If a session is connected through a dedicated server, private SQL areas are located in the server process's PGA. However, if a session is connected through a shared server, part of the private SQL area is kept in the SGA.
Note:
*System global area (SGA)
The SGA is a group of shared memory structures, known as SGA components, that contain data and control information for one Oracle Database instance. The SGA is shared by all server and background processes. Examples of data stored in the SGA include cached data blocks and shared SQL areas.
* Program global area (PGA)
A PGA is a memory region that contains data and control information for a server process. It is nonshared memory created by Oracle Database when a server process is started. Access to the PGA is exclusive to the server process. There is one PGA for each server process. Background processes also allocate their own PGAs. The total memory used by all individual PGAs is known as the total instance PGA memory, and the collection of individual PGAs is referred to as the total instance PGA, or just instance PGA. You use database initialization parameters to set the size of the instance PGA, not individual PGAs.
Reference: Oracle Database Concepts 12c
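As a hedged illustration, with shared server configured the UGA (session state) is allocated in the SGA, typically from the large pool; the parameter change and check below are illustrative:
SQL> ALTER SYSTEM SET SHARED_SERVERS = 5;
SQL> SELECT pool, name, bytes FROM v$sgastat WHERE pool = 'large pool';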

QUESTION 80
Which two statements are true about Oracle Managed Files (OMF)?

A.    OMF cannot be used in a database that already has data files created with user-specified directories.
B.    The file system directories that are specified by OMF parameters are created automatically.
C.    OMF can be used with ASM disk groups, as well as with raw devices, for better file management.
D.    OMF automatically creates unique file names for tablespaces and control files.
E.    OMF may affect the location of the redo log files and archived log files.

Answer: BD
Explanation:
B:Through initialization parameters, you specify the file system directory to be used for a particular type of file. The database then ensures that a unique file, an Oracle-managed file, is created and deleted when no longer needed.
D: The database internally uses standard file system interfaces to create and delete files as needed for the following database structures:
Tablespaces
Redo log files
Control files
Archived logs
Block change tracking files
Flashback logs
RMAN backups
Note:
*Using Oracle-managed files simplifies the administration of an Oracle Database. Oracle-managed files eliminate the need for you, the DBA, to directly manage the operating system files that make up an Oracle Database. With Oracle-managed files, you specify file system directories in which the database automatically creates, names, and manages files at the database object level. For example, you need only specify that you want to create a tablespace; you do not need to specify the name and path of the tablespace’s datafile with the DATAFILE clause.
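A hedged sketch of OMF in practice (the directory paths are assumptions):
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata';
SQL> ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/app/oracle/oradata';
SQL> CREATE TABLESPACE sales_ts;   -- data file name and location generated automatically by OMF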

If you want to pass the Oracle 12c 1Z0-060 exam successfully, we recommend reading the latest Oracle 12c 1Z0-060 Dumps full version.

http://www.lead2pass.com/1z0-060.html

Lead2Pass: Latest Free Oracle 1Z0-060 Dumps (61-70) Download!

QUESTION 61
Which three statements are true concerning the multitenant architecture?

A.    Each pluggable database (PDB) has its own set of background processes.
B.    A PDB can have a private temp tablespace.
C.    PDBs can share the sysaux tablespace.
D.    Log switches occur only at the multitenant container database (CDB) level.
E.    Different PDBs can have different default block sizes.
F.    PDBs share a common system tablespace.
G.    Instance recovery is always performed at the CDB level.

Answer: BDG
Explanation:
B:
* A PDB has its own SYSTEM, SYSAUX, and TEMP tablespaces. It can also contain other user-created tablespaces.
* There is one default temporary tablespace for the entire CDB. However, you can create additional temporary tablespaces in individual PDBs.
D:
* There is a single redo log and a single control file for an entire CDB
* A log switch is the point at which the database stops writing to one redo log file and begins writing to another. Normally, a log switch occurs when the current redo log file is completely filled and writing must continue to the next redo log file.
G: instance recovery
The automatic application of redo log records to uncommitted data blocks when a database instance is restarted after a failure.
Incorrect:
Not A:
* There is one set of background processes shared by the root and all PDBs.
* High consolidation density. The many pluggable databases in a single container database share its memory and background processes, letting you operate many more pluggable databases on a particular platform than you can single databases that use the old architecture.
Not C: There is a separate SYSAUX tablespace for the root and for each PDB.
Not F: There is a separate SYSTEM tablespace for the root and for each PDB.
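A hedged way to confirm the per-container SYSTEM and SYSAUX split from the root (standard CDB_* data dictionary view):
SQL> SELECT con_id, tablespace_name
     FROM   cdb_tablespaces
     WHERE  tablespace_name IN ('SYSTEM', 'SYSAUX')
     ORDER  BY con_id;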

QUESTION 62
You notice that the elapsed time for an important database Scheduler job is unacceptably long.
The job belongs to a scheduler job class and window.
Which two actions would reduce the job’s elapsed time?

A.    Increasing the priority of the job class to which the job belongs
B.    Increasing the job’s relative priority within the Job class to which it belongs
C.    Increasing the resource allocation for the consumer group mapped to the scheduler job’s job class within
the plan mapped to the scheduler window
D.    Moving the job to an existing higher priority scheduler window with the same schedule and duration
E.    Increasing the value of the JOB_QUEUE_PROCESSES parameter
F.    Increasing the priority of the scheduler window to which the job belongs

Answer: BC
Explanation:
B: Job priorities are used only to prioritize among jobs in the same class.
Note: Group jobs for prioritization
Within the same job class, you can assign priority values of 1-5 to individual jobs so that if two jobs in the class are scheduled to start at the same time, the one with the higher priority takes precedence. This ensures that you do not have a less important job preventing the timely completion of a more important one.
C: Set resource allocation for member jobs
Job classes provide the link between the Database Resource Manager and the Scheduler, because each job class can specify a resource consumer group as an attribute. Member jobs then belong to the specified consumer group and are assigned resources according to settings in the current resource plan.
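A hedged sketch of both actions; the job, consumer group, and plan names are assumptions:
BEGIN
  -- B: raise the job's relative priority within its job class (1 = highest)
  DBMS_SCHEDULER.SET_ATTRIBUTE('NIGHTLY_LOAD_JOB', 'job_priority', 1);
  -- C: give the class's consumer group a larger CPU share in the window's plan
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
    plan             => 'NIGHT_PLAN',
    group_or_subplan => 'BATCH_GROUP',
    new_mgmt_p1      => 80);
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/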

QUESTION 63
You plan to migrate your database from a file system to Automatic Storage Management (ASM) on the same platform.
Which two methods or commands would you use to accomplish this task?

A.    RMAN CONVERT command
B.    Data Pump Export and import
C.    Conventional Export and Import
D.    The BACKUP AS COPY DATABASE . . . command of RMAN
E.    DBMS_FILE_TRANSFER with transportable tablespace

Answer: AD
Explanation:
A:
1. Get the list of all datafiles.
Note: RMAN Backup of ASM Storage
There is often a need to move the files from the file system to the ASM storage and vice versa. This may come in handy when one of the file systems is corrupted by some means and then the file may need to be moved to the other file system.
D: Migrating a Database into ASM
*To take advantage of Automatic Storage Management with an existing database you must migrate that database into ASM. This migration is performed using Recovery Manager (RMAN) even if you are not using RMAN for your primary backup and recovery strategy.
* Example:
Back up your database files as copies to the ASM disk group.
BACKUP AS COPY INCREMENTAL LEVEL 0 DATABASE
FORMAT '+DISK' TAG 'ORA_ASM_MIGRATION';
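Once the image copies exist in the disk group, the database is switched over to them; a hedged continuation of the example:
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> SWITCH DATABASE TO COPY;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;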

QUESTION 64
You run a script that completes successfully using SQL*Plus that performs these actions:
1. Creates a multitenant container database (CDB)
2. Plugs in three pluggable databases (PDBs)
3. Shuts down the CDB instance
4. Starts up the CDB instance using STARTUP OPEN READ WRITE
Which two statements are true about the outcome after running the script?

A.    The seed will be in mount state.
B.    The seed will be opened read-only.
C.    The seed will be opened read/write.
D.    The other PDBs will be in mount state.
E.    The other PDBs will be opened read-only.
F.    The PDBs will be opened read/write.

Answer: BD
Explanation:
B: The seed is always read-only.
D: Pluggable databases can be started and stopped using SQL*Plus commands or the ALTER PLUGGABLE DATABASE command.
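A hedged check of the resulting open modes after the STARTUP:
SQL> SELECT name, open_mode FROM v$pdbs;
-- expected: PDB$SEED shows READ ONLY, the plugged-in PDBs show MOUNTED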

QUESTION 65
You execute the following piece of code with appropriate privileges:

(Exhibit not reproduced.)
User SCOTT has been granted the CREATE SESSION privilege and the MGR role.
Which two statements are true when a session logged in as SCOTT queries the SAL column in the view and the table?

A.    Data is redacted for the EMP.SAL column only if the SCOTT session does not have the MGR role set.
B.    Data is redacted for EMP.SAL column only if the SCOTT session has the MGR role set.
C.    Data is never redacted for the EMP_V.SAL column.
D.    Data is redacted for the EMP_V.SAL column only if the SCOTT session has the MGR role set.
E.    Data is redacted for the EMP_V.SAL column only if the SCOTT session does not have the MGR role set.

Answer: AC
Explanation:
Note:
*DBMS_REDACT.FULL completely redacts the column data.
*DBMS_REDACT.NONE applies no redaction on the column data. Use this function for development testing purposes. LOB columns are not supported.
*The DBMS_REDACT package provides an interface to Oracle Data Redaction, which enables you to mask (redact) data that is returned from queries issued by low-privileged users or an application.
*If you create a view chain (that is, a view based on another view), then the Data Redaction policy also applies throughout this view chain. The policies remain in effect all of the way up through this view chain, but if another policy is created for one of these views, then for the columns affected in the subsequent views, this new policy takes precedence.
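The exhibit is not reproduced here; as a hedged sketch only, a redaction policy of the general shape this question tests could look like the following (every name and the expression are assumptions):
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'HR',
    object_name   => 'EMP',
    column_name   => 'SAL',
    policy_name   => 'redact_sal',
    function_type => DBMS_REDACT.FULL,
    expression    => 'SYS_CONTEXT(''SYS_SESSION_ROLES'', ''MGR'') = ''FALSE''');
END;
/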

QUESTION 66
Your database is open and the LISTENER listener is running. You stopped the listener LISTENER by issuing the following command:
LSNRCTL> STOP
What happens to the sessions that are presently connected to the database Instance?

A.    They are able to perform only queries.
B.    They are not affected and continue to function normally.
C.    They are terminated and the active transactions are rolled back.
D.    They are not allowed to perform any operations until the listener LISTENER is started.

Answer: B
Explanation:
The listener is used when the connection is established. The immediate impact of stopping the listener will be that no new session can be established from a remote host. Existing sessions are not compromised.

QUESTION 67
Which three statements are true about using flashback database in a multitenant container database (CDB)?

A.    The root container can be flashed back without flashing back the pluggable databases (PDBs).
B.    To enable flashback database, the CDB must be mounted.
C.    Individual PDBs can be flashed back without flashing back the entire CDB.
D.    The DB_FLASHBACK_RETENTION_TARGET parameter must be set to enable flashback of the CDB.
E.    A CDB can be flashed back specifying the desired target point in time or an SCN, but not a restore point.

Answer: CDE
Explanation:
C: *RMAN provides support for point-in-time recovery for one or more pluggable databases (PDBs). The process of performing recovery is similar to that of DBPITR. You use the RECOVER command to perform point-in-time recovery of one or more PDBs. However, to recover PDBs, you must connect to the root as a user with the SYSDBA or SYSBACKUP privilege.
D: DB_FLASHBACK_RETENTION_TARGET specifies the upper limit (in minutes) on how far back in time the database may be flashed back. How far back one can flash back a database depends on how much flashback data Oracle has kept in the flash recovery area.
Range of values: 0 to 2^31 - 1
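A hedged sketch of the prerequisites mentioned above (the retention value is illustrative and a fast recovery area is assumed to be configured):
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET = 1440;   -- minutes
SQL> ALTER DATABASE FLASHBACK ON;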

QUESTION 68
You execute the following PL/SQL:

(Exhibit not reproduced.)
Which two statements are true?

A.    Fine-Grained Auditing (FGA) is enabled for the PRICE column in the PRODUCTS table for SELECT
statements only when a row with PRICE > 10000 is accessed.
B.    FGA is enabled for the PRODUCTS.PRICE column and an audit record is written whenever a row with
PRICE > 10000 is accessed.
C.    FGA is enabled for all DML operations by JIM on the PRODUCTS.PRICE column.
D.    FGA is enabled for the PRICE column of the PRODUCTS table and the SQL statement is captured in
the FGA audit trail.

Answer: AB
Explanation:
DBMS_FGA.add_policy
*The DBMS_FGA package provides fine-grained security functions.
*ADD_POLICY Procedure
This procedure creates an audit policy using the supplied predicate as the audit condition.
Incorrect:
Not C: object_schema
The schema of the object to be audited. (If NULL, the current log-on user schema is assumed.)
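The PL/SQL exhibit is not reproduced; as a hedged sketch only, an FGA policy of the kind this question tests might look like this (all names and values are assumptions):
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'SH',
    object_name     => 'PRODUCTS',
    policy_name     => 'price_watch',
    audit_condition => 'PRICE > 10000',
    audit_column    => 'PRICE',
    statement_types => 'SELECT');
END;
/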

QUESTION 69
You execute the following commands to audit database activities:
SQL> ALTER SYSTEM SET AUDIT_TRAIL=DB, EXTENDED SCOPE=SPFILE;
SQL> AUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE BY JOHN BY SESSION WHENEVER SUCCESSFUL;
Which statement is true about the audit records that are generated when auditing starts after the instance restarts?

A.    One audit record is created for every successful execution of a SELECT, INSERT OR DELETE command
on a table, and contains the SQL text for the SQL Statements.
B.    One audit record is created for every successful execution of a SELECT, INSERT OR DELETE command,
and contains the execution plan for the SQL statements.
C.    One audit record is created for the whole session if JOHN successfully executes a SELECT, INSERT, or
DELETE command, and contains the execution plan for the SQL statements.
D.    One audit record is created for the whole session if JOHN successfully executes a SELECT command, and
contains the SQL text and bind variables used.
E.    One audit record is created for the whole session if JOHN successfully executes a SELECT, INSERT, or
DELETE command on a table, and contains the execution plan, SQL text, and bind variables used.

Answer: A
Explanation:
Note:
*BY SESSION
In earlier releases, BY SESSION caused the database to write a single record for all SQL statements or operations of the same type executed on the same schema objects in the same session. Beginning with this release (11g) of Oracle Database, both BY SESSION and BY ACCESS cause Oracle Database to write one audit record for each audited statement and operation.
*BY ACCESS
Specify BY ACCESS if you want Oracle Database to write one record for each audited statement and operation.
Note:
If you specify either a SQL statement shortcut or a system privilege that audits a data definition language (DDL) statement, then the database always audits by access. In all other cases, the database honors the BY SESSION or BY ACCESS specification.
*For each audited operation, Oracle Database produces an audit record containing this information:
/The user performing the operation
/The type of operation
/The object involved in the operation
/The date and time of the operation
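With AUDIT_TRAIL=DB,EXTENDED, the captured SQL text and bind values can be inspected in the standard audit view; a hedged example:
SQL> SELECT username, action_name, sql_text, sql_bind
     FROM   dba_audit_trail
     WHERE  username = 'JOHN';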


Lead2Pass: Latest Free Oracle 1Z0-060 Dumps (51-60) Download!

QUESTION 51
You executed a DROP USER CASCADE on an Oracle 11g release 1 database and immediately realized that you forgot to copy the OCA.EXAM_RESULTS table to the OCP schema.
The RECYCLE_BIN was enabled before the DROP USER was executed, and the OCP user has been granted the FLASHBACK ANY TABLE system privilege.
What is the quickest way to recover the contents of the OCA.EXAM_RESULTS table to the OCP schema?

A.    Execute FLASHBACK TABLE OCA.EXAM_RESULTS TO BEFORE DROP RENAME TO
OCP.EXAM_RESULTS; connected as SYSTEM.
B.    Recover the table using traditional Tablespace Point In Time Recovery.
C.    Recover the table using Automated Tablespace Point In Time Recovery.
D.    Recover the table using Database Point In Time Recovery.
E.    Execute FLASHBACK TABLE OCA.EXAM_RESULTS TO BEFORE DROP RENAME TO
EXAM_RESULTS; connected as the OCP user.

Answer: E
Explanation:
*To flash back a table to an earlier SCN or timestamp, you must have either the FLASHBACK object privilege on the table or the FLASHBACK ANY TABLE system privilege.
* From the question: the OCP user has been granted the FLASHBACK ANY TABLE system privilege.
*Syntax
flashback_table::=

(Syntax diagram not reproduced.)

QUESTION 52
In your multitenant container database (CDB) containing pluggable databases (PDBs), the HR user executes the following commands to create and grant privileges on a procedure:
CREATE OR REPLACE PROCEDURE create_test (v_emp_id NUMBER, v_ename VARCHAR2,
v_salary NUMBER, v_dept_id NUMBER)
IS
BEGIN
  INSERT INTO hr.test VALUES (v_emp_id, v_ename, v_salary, v_dept_id);
END;
/
GRANT EXECUTE ON CREATE_TEST TO john, jim, smith, king;
How can you prevent users having the EXECUTE privilege on the CREATE_TEST procedure from inserting values into tables on which they do not have any privileges?

A.    Create the CREATE_TEST procedure with definer’s rights.
B.    Grant the EXECUTE privilege to users with GRANT OPTION on the CREATE_TEST procedure.
C.    Create the CREATE_TEST procedure with invoker’s rights.
D.    Create the CREATE_TEST procedure as part of a package and grant users the EXECUTE privilege on
the package.

Answer: C
Explanation:
If a program unit does not need to be executed with the escalated privileges of the definer, you should specify that the program unit executes with the privileges of the caller, also known as the invoker. Invoker’s rights can mitigate the risk of SQL injection.
Incorrect:
Not A: By default, stored procedures and SQL methods execute with the privileges of their owner, not their current user. Such definer-rights subprograms are bound to the schema in which they reside.
Not B: Using the GRANT option, a user can grant an object privilege to another user or to PUBLIC.
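A hedged sketch of the invoker's-rights variant of the procedure from the stem; only the AUTHID clause is added:
CREATE OR REPLACE PROCEDURE create_test (v_emp_id NUMBER, v_ename VARCHAR2, v_salary NUMBER, v_dept_id NUMBER)
AUTHID CURRENT_USER
IS
BEGIN
  INSERT INTO hr.test VALUES (v_emp_id, v_ename, v_salary, v_dept_id);
END;
/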

QUESTION 53
You created a new database using the "CREATE DATABASE" statement without specifying the "ENABLE PLUGGABLE DATABASE" clause.
What are two effects of not using the "ENABLE PLUGGABLE DATABASE" clause?

A.    The database is created as a non-CDB and can never contain a PDB.
B.    The database is treated as a PDB and must be plugged into an existing multitenant container database (CDB).
C.    The database is created as a non-CDB and can never be plugged into a CDB.
D.    The database is created as a non-CDB but can be plugged into an existing CDB.
E.    The database is created as a non-CDB but will become a CDB whenever the first PDB is plugged in.

Answer: AD
Explanation:
A (not B, not E): The CREATE DATABASE ... ENABLE PLUGGABLE DATABASE SQL statement creates a new CDB. If you do not specify the ENABLE PLUGGABLE DATABASE clause, then the newly created database is a non-CDB and can never contain PDBs.
D: You can create a PDB by plugging in a Non-CDB as a PDB. The following graphic depicts the options for creating a PDB:

(Figure from the Oracle documentation not reproduced.)
Incorrect:
Not E: For the duration of its existence, a database is either a CDB or a non-CDB. You cannot transform a non-CDB into a CDB or vice versa. You must define a database as a CDB at creation, and then create PDBs within this CDB.

QUESTION 54
What is the effect of specifying the “ENABLE PLUGGABLE DATABASE” clause in a “CREATE DATABASE” statement?

A.    It will create a multitenant container database (CDB) with only the root opened.
B.    It will create a CDB with root opened and seed read only.
C.    It will create a CDB with root and seed opened and one PDB mounted.
D.    It will create a CDB that must be plugged into an existing CDB.
E.    It will create a CDB with root opened and seed mounted.

Answer: B
Explanation:
*The CREATE DATABASE … ENABLE PLUGGABLE DATABASE SQL statement creates a new CDB. If you do not specify the ENABLE PLUGGABLE DATABASE clause, then the newly created database is a non-CDB and can never contain PDBs.
Along with the root (CDB$ROOT), Oracle Database automatically creates a seed PDB (PDB$SEED). The following graphic shows a newly created CDB:

(Figure from the Oracle documentation not reproduced.)
*Creating a PDB
Rather than constructing the data dictionary tables that define an empty PDB from scratch, and then populating its Obj$ and Dependency$ tables, the empty PDB is created when the CDB is created. (Here, we use empty to mean containing no customer-created artifacts.) It is referred to as the seed PDB and has the name PDB$Seed. Every CDB non-negotiably contains a seed PDB; it is non-negotiably always open in read-only mode. This has no conceptual significance; rather, it is just an optimization device. The create PDB operation is implemented as a special case of the clone PDB operation.

QUESTION 55
You have installed two 64G flash devices to support the Database Smart Flash Cache feature on your database server that is running on Oracle Linux.
You have set the DB_FLASH_CACHE_FILE parameter:
DB_FLASH_CACHE_FILE = '/dev/flash_device_1', '/dev/flash_device_2'
How should the DB_FLASH_CACHE_SIZE be configured to use both devices?

A.    Set DB_FLASH_CACHE_SIZE = 64G.
B.    Set DB_FLASH_CACHE_SIZE = 64G, 64G.
C.    Set DB_FLASH_CACHE_SIZE = 128G.
D.    DB_FLASH_CACHE_SIZE is automatically configured by the instance at startup.

Answer: B
Explanation:
* The Smart Flash Cache concept is not new in Oracle 12c; DB Smart Flash Cache already existed in Oracle 11g.
In this release Oracle has made changes to both initialization parameters used by DB Smart Flash Cache. You can now define multiple files/devices and their sizes for the Database Smart Flash Cache area. In previous releases, only one file/device could be defined.
DB_FLASH_CACHE_FILE = /dev/sda, /dev/sdb, /dev/sdc
DB_FLASH_CACHE_SIZE = 32G, 32G, 64G
The settings above define three devices that will be used by DB Smart Flash Cache:
/dev/sda - size 32G
/dev/sdb - size 32G
/dev/sdc - size 64G
The new view V$FLASHFILESTAT is used to determine the cumulative latency and read counts of each file/device and to compute the average latency.
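For the two 64G devices in this question, a hedged parameter-file sketch (matching answer B) pairs one size with each device:
DB_FLASH_CACHE_FILE = '/dev/flash_device_1', '/dev/flash_device_2'
DB_FLASH_CACHE_SIZE = 64G, 64G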

QUESTION 56
Examine the following parameters for a database instance:
MEMORY_MAX_TARGET=0
MEMORY_TARGET=0
SGA_TARGET=0
PGA_AGGREGATE_TARGET=500m
Which three initialization parameters are not controlled by Automatic Shared Memory Management (ASMM)?

A.    LOG_BUFFER
B.    SORT_AREA_SIZE
C.    JAVA_POOL_SIZE
D.    STREAMS_POOL_SIZE
E.    DB_16K_CACHE_SIZE
F.    DB_KEEP_CACHE_SIZE

Answer: AEF
Explanation:
Manually Sized SGA Components that Use SGA_TARGET Space (SGA component - initialization parameter):
/The log buffer - LOG_BUFFER
/The keep and recycle buffer caches - DB_KEEP_CACHE_SIZE, DB_RECYCLE_CACHE_SIZE
/Nonstandard block size buffer caches - DB_nK_CACHE_SIZE
Note:
* In addition to setting SGA_TARGET to a nonzero value, you must set to zero all initialization parameters listed in the table below to enable full automatic tuning of the automatically sized SGA components.
* Table: Automatically Sized SGA Components and Corresponding Parameters
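A hedged sketch contrasting an auto-tuned target with a manually sized component (values are illustrative); the manual caches are carved out of SGA_TARGET and are not auto-tuned:
SQL> ALTER SYSTEM SET SGA_TARGET = 2G;            -- enables ASMM (subject to SGA_MAX_SIZE)
SQL> ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 256M;  -- manually sized, not managed by ASMM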

QUESTION 57
Examine the contents of the SQL*Loader control file:
Which three statements are true regarding the SQL*Loader operation performed using the control file?

(Exhibit not reproduced.)

A.    An EMP table is created if a table does not exist. Otherwise, the EMP table is appended with the loaded data.
B.    The SQL*Loader data file myfile1.dat has the column names for the EMP table.
C.    The SQL*Loader operation fails because no record terminators are specified.
D.    Field names should be the first line in both the SQL*Loader data files.
E.    The SQL*Loader operation assumes that the file must be a stream record format file with the normal carriage
return string as the record terminator.

Answer: ABE
Explanation:
A: The APPEND keyword tells SQL*Loader to preserve any preexisting data in the table. Other options allow you to delete preexisting data, or to fail with an error if the table is not empty to begin with.
B (not D):
Note:
*SQL*Loader-00210: first data file is empty, cannot process the FIELD NAMES record Cause: The data file listed in the next message was empty. Therefore, the FIELD NAMES FIRST FILE directive could not be processed.
Action: Check the listed data file and fix it. Then retry the operation
E:
*A comma-separated values (CSV) file (also sometimes called a character-separated values file, because the separator character does not have to be a comma) stores tabular data (numbers and text) in plain-text form. Plain text means that the file is a sequence of characters, with no data that has to be interpreted as binary numbers. A CSV file consists of any number of records, separated by line breaks of some kind; each record consists of fields, separated by some other character or string, most commonly a literal comma or tab. Usually, all records have an identical sequence of fields.
*Fields with embedded commas must be quoted.
Example:
1997,Ford,E350,”Super, luxurious truck”
Note:
*SQL*Loader is a bulk loader utility used for moving data from external files into the Oracle database.
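The control file exhibit is not reproduced; as a hedged sketch only, a control file exercising the features discussed above might look like this (file, table, and column names are assumptions):
LOAD DATA
INFILE 'myfile1.dat'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, sal, deptno)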

QUESTION 58
In your multitenant container database (CDB) containing pluggable databases (PDBs), you granted the CREATE TABLE privilege to the common user C##A_ADMIN in root and all PDBs. You execute the following command from the root container:
SQL> REVOKE CREATE TABLE FROM C##A_ADMIN;
What is the result?

A.    It executes successfully and the CREATE TABLE privilege is revoked from C##A_ADMIN in root only.
B.    It fails and reports an error because the CONTAINER=ALL clause is not used.
C.    It executes successfully and the CREATE TABLE privilege is revoked from C##A_ADMIN in root and all PDBs.
D.    It fails and reports an error because the CONTAINER=CURRENT clause is not used.
E.    It executes successfully and the CREATE TABLE privilege is revoked from C##A_ADMIN in all PDBs.

Answer: A
Explanation:
REVOKE ... FROM
If the current container is the root:
/Specify CONTAINER = CURRENT to revoke a locally granted system privilege, object privilege, or role from a common user or common role. The privilege or role is revoked from the user or role only in the root. This clause does not revoke privileges granted with CONTAINER = ALL.
/Specify CONTAINER = ALL to revoke a commonly granted system privilege, object privilege on a common object, or role from a common user or common role. The privilege or role is revoked from the user or role across the entire CDB. This clause can revoke only a privilege or role granted with CONTAINER = ALL from the specified common user or common role. This clause does not revoke privileges granted locally with CONTAINER = CURRENT. However, any locally granted privileges that depend on the commonly granted privilege being revoked are also revoked.
If you omit this clause, then CONTAINER = CURRENT is the default.
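By contrast, a hedged example of the CDB-wide form of the same revoke:
SQL> REVOKE CREATE TABLE FROM c##a_admin CONTAINER = ALL;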

QUESTION 59
Which two statements are true concerning the Resource Manager plans for individual pluggable databases (PDB plans) in a multitenant container database (CDB)?

A.    If no PDB plan is enabled for a pluggable database, then all sessions for that PDB are treated to an
equal degree of the resource share of that PDB.
B.    In a PDB plan, subplans may be used with up to eight consumer groups.
C.    If a PDB plan is enabled for a pluggable database, then resources are allocated to consumer groups
across all PDBs in the CDB.
D.    If no PDB plan is enabled for a pluggable database, then the PDB share in the CDB plan is dynamically
calculated.
E.    If a PDB plan is enabled for a pluggable database, then resources are allocated to consumer groups
based on the shares provided to the PDB in the CDB plan and the shares provided to the consumer
groups in the PDB plan.

Answer: ADE
Explanation:
A: Setting a PDB resource plan is optional. If not specified, all sessions within the PDB are treated equally.
*
In a non-CDB database, workloads within a database are managed with resource plans. In a PDB, workloads are also managed with resource plans, also called PDB resource plans. The functionality is similar except for the following differences:
/Non-CDB database:
Multi-level resource plans
Up to 32 consumer groups
Subplans
/PDB:
Single-level resource plans only
Up to 8 consumer groups
(not B) No subplans
Incorrect: Not C.
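A hedged sketch of how shares are given to a PDB in the CDB plan (plan and PDB names are assumptions):
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(plan => 'CDB_PLAN', comment => 'demo CDB plan');
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'CDB_PLAN',
    pluggable_database => 'PDB1',
    shares             => 3,
    utilization_limit  => 100);
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/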

QUESTION 60
You use a recovery catalog for maintaining your database backups.
You execute the following command:
$ rman TARGET / CATALOG rman/cat@catdb
RMAN > BACKUP VALIDATE DATABASE ARCHIVELOG ALL;
Which two statements are true?

A.    Corrupted blocks, if any, are repaired.
B.    Checks are performed for physical corruptions.
C.    Checks are performed for logical corruptions.
D.    Checks are performed to confirm whether all database files exist in the correct locations.
E.    Backup sets containing both data files and archive logs are created.

Answer: BD
Explanation:
B (not C): You can validate that all database files and archived redo logs can be backed up by running a command as follows:
RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL;
This form of the command would check for physical corruption. To check for logical corruption,
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;
D: You can use the VALIDATE keyword of the BACKUP command to do the following:
Check datafiles for physical and logical corruption
Confirm that all database files exist and are in the correct locations.
Note:
You can use the VALIDATE option of the BACKUP command to verify that database files exist and are in the correct locations (D), and have no physical or logical corruptions that would prevent RMAN from creating backups of them. When performing a BACKUP ... VALIDATE, RMAN reads the files to be backed up in their entirety, as it would during a real backup. It does not, however, actually produce any backup sets or image copies (Not A, not E).

If you want to pass the Oracle 12c 1Z0-060 exam successfully, we recommend reading the latest Oracle 12c 1Z0-060 Dumps full version.

http://www.lead2pass.com/1z0-060.html

Lead2Pass: Latest Free Oracle 1Z0-060 Dumps (41-50) Download!

QUESTION 41
You are planning the creation of a new multitenant container database (CDB) and want to store the ROOT and SEED container data files in separate directories.
You plan to create the database using SQL statements.
Which three techniques can you use to achieve this?

A.    Use Oracle Managed Files (OMF).
B.    Specify the SEED FILE_NAME_CONVERT clause.
C.    Specify the PDB_FILE_NAME_CONVERT initialization parameter.
D.    Specify the DB_FILE_NAME_CONVERT initialization parameter.
E.    Specify all files in the CREATE DATABASE statement without using Oracle Managed Files (OMF).

Answer: ABC
Explanation:
You must specify the names and locations of the seed’s files in one of the following ways:
* (A) Oracle Managed Files
* (B) The SEED FILE_NAME_CONVERT Clause
* (C) The PDB_FILE_NAME_CONVERT Initialization Parameter
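A hedged, minimal sketch of technique B (all paths are assumptions; the statement normally carries many more clauses):
CREATE DATABASE cdb1
  ENABLE PLUGGABLE DATABASE
    SEED FILE_NAME_CONVERT = ('/u01/oradata/cdb1/', '/u01/oradata/cdb1/seed/');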

QUESTION 42
You are about to plug a multi-terabyte non-CDB into an existing multitenant container database (CDB).
The characteristics of the non-CDB are as follows:
- Version: Oracle Database 11g Release 2 (11.2.0.2.0) 64-bit
– Character set: AL32UTF8
– National character set: AL16UTF16
– O/S: Oracle Linux 6 64-bit
The characteristics of the CDB are as follows:
– Version: Oracle Database 12c Release 1 64-bit
– Character Set: AL32UTF8
– National character set: AL16UTF16
– O/S: Oracle Linux 6 64-bit
Which technique should you use to minimize down time while plugging this non-CDB into the CDB?

A.    Transportable database
B.    Transportable tablespace
C.    Data Pump full export/import
D.    The DBMS_PDB package
E.    RMAN

Answer: D
Explanation:
*Overview, example:
– Log into ncdb12c as sys
– Get the database in a consistent state by shutting it down cleanly.
– Open the database in read only mode
– Run DBMS_PDB.DESCRIBE to create an XML file describing the database.
– Shut down ncdb12c
– Connect to target CDB (CDB2)
– Check whether non-cdb (NCDB12c) can be plugged into CDB(CDB2)
– Plug-in Non-CDB (NCDB12c) as PDB(NCDB12c) into target CDB(CDB2).
– Access the PDB and run the noncdb_to_pdb.sql script.
– Open the new PDB in read/write mode.
*You can easily plug an Oracle Database 12c non-CDB into a CDB. Just create a PDB manifest file for the non-CDB, and then use the manifest file to create a cloned PDB in the CDB.
*Note that to plug a non-CDB database into a CDB, the non-CDB database needs to be of version 12c as well. So existing 11g databases will need to be upgraded to 12c before they can be part of a 12c CDB.
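A hedged sketch of the key steps (names and paths are assumptions; the source must already be 12c and opened read-only for the DESCRIBE step):
SQL> EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/ncdb12c.xml');   -- run in the non-CDB
SQL> CREATE PLUGGABLE DATABASE ncdb12c USING '/tmp/ncdb12c.xml'
       NOCOPY TEMPFILE REUSE;                                        -- run in the CDB root
SQL> ALTER SESSION SET CONTAINER = ncdb12c;
SQL> @?/rdbms/admin/noncdb_to_pdb.sql
SQL> ALTER PLUGGABLE DATABASE ncdb12c OPEN;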

QUESTION 43
Your database supports an online transaction processing (OLTP) application. The application is undergoing some major schema changes, such as addition of new indexes and materialized views. You want to check the impact of these changes on workload performance.
What should you use to achieve this?

A.    Database replay
B.    SQL Tuning Advisor
C.    SQL Access Advisor
D.    SQL Performance Analyzer
E.    Automatic Workload Repository compare reports

Answer: E
Explanation:
While an AWR report shows AWR data between two snapshots (or two points in time), the AWR Compare Periods report shows the difference between two periods (or two AWR reports with a total of four snapshots). Using the AWR Compare Periods report helps you to identify detailed performance attributes and configuration settings that differ between two time periods.
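A hedged way to produce the compare-periods report; the DBID and snapshot IDs below are placeholders:
SQL> @?/rdbms/admin/awrddrpt.sql
-- or programmatically:
SQL> SELECT output FROM TABLE(
       DBMS_WORKLOAD_REPOSITORY.AWR_DIFF_REPORT_TEXT(
         123456789, 1, 100, 101,    -- first period:  dbid, instance, begin/end snapshots
         123456789, 1, 110, 111));  -- second period: dbid, instance, begin/end snapshots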

QUESTION 44
An administrator account is granted the CREATE SESSION and SET CONTAINER system privileges.
A multitenant container database (CDB) instance has the following parameter set:
THREADED_EXECUTION = FALSE
Which four statements are true about this administrator establishing connections to root in a CDB that has been opened in read only mode?

A.    You can connect as a common user by using the CONNECT statement.
B.    You can connect as a local user by using the CONNECT statement.
C.    You can connect by using easy connect.
D.    You can connect by using OS authentication.
E.    You can connect by using a Net Service name.
F.    You can connect as a local user by using the SET CONTAINER statement.

Answer: CDEF
Explanation:
*The choice of threading model is dictated by the THREADED_EXECUTION initialization parameter.
THREADED_EXECUTION=FALSE: The default value causes Oracle to run using the multiprocess model.
THREADED_EXECUTION=TRUE: Oracle runs with the multithreaded model.
*OS authentication is not supported with the multithreaded model.
*THREADED_EXECUTION
When this initialization parameter is set to TRUE, which enables the multithreaded Oracle model, operating system authentication is not supported. Attempts to connect to the database using operating system authentication (for example, CONNECT / AS SYSDBA or CONNECT /) when this initialization parameter is set to TRUE receive an ORA-01031 "insufficient privileges" error.
F: The new SET CONTAINER statement within a callback function:
The advantage of SET CONTAINER is that the pool does not have to create a new connection to a PDB, if there is an existing connection to a different PDB. The pool can use the existing connection, and through SET CONTAINER, can connect to the desired PDB. This can be done using:
ALTER SESSION SET CONTAINER=<PDB Name>
This avoids the need to create a new connection from scratch.

QUESTION 45
Examine the following query output:

(Exhibit not reproduced.)
You issue the following command to import tables into the hr schema:
$ impdp hr/hr directory=dumpdir dumpfile=hr_new.dmp schemas=hr TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
Which statement is true?

A.    All database operations performed by the impdp command are logged.
B.    Only CREATE INDEX and CREATE TABLE statements generated by the import are logged.
C.    Only CREATE TABLE and ALTER TABLE statements generated by the import are logged.
D.    None of the operations against the master table used by Oracle Data Pump to coordinate its activities
are logged.

Answer: D
Explanation:
* From the exhibit we see that FORCE_LOGGING is set to NO.
* Datapump Import impdp in 12c includes a new parameter to disable logging during data import. This option could improve performance of import tremendously during large data loads.
The TRANSFORM=DISABLE_ARCHIVE_LOGGING is used to disable logging. The value can be Y or N. Y to disable logging and N to enable logging.
However, if the database is running with FORCE LOGGING enabled, data pump ignores disable logging request.
Note:
* When the primary database is in FORCE LOGGING mode, all database data changes are logged. FORCE LOGGING mode ensures that the standby database remains consistent with the primary database.
* force_logging in V$DATABASE
A tablespace or the entire database is either in force logging or no force logging mode. To see which it is, run:
SQL> SELECT force_logging FROM v$database;
FOR
---
NO

QUESTION 46
You notice a performance change in your production Oracle database and you want to know which change has made this performance difference.
You generate the Compare Period Automatic Database Diagnostic Monitor (ADDM) report for further investigation.
Which three findings would you get from the report?

A.    It detects any configuration change that caused a performance difference in both time periods.
B.    It identifies any workload change that caused a performance difference in both time periods.
C.    It detects the top wait events causing performance degradation.
D.    It shows the resource usage for CPU, memory, and I/O in both time periods.
E.    It shows the difference in the size of memory pools in both time periods.
F.    It gives information about statistics collection in both time periods.

Answer: ABE
Explanation:
Keyword: shows the difference.
*Full ADDM analysis across two AWR snapshot periods
Detects causes, measures effects, then correlates them
Causes: workload changes, configuration changes
Effects: regressed SQL, reached resource limits (CPU, I/O, memory, interconnect)
Makes actionable recommendations along with quantified impact
*Identify what changed
/Configuration changes, workload changes
*Performance degradation of the database occurs when your database was performing optimally in the past, such as 6 months ago, but has gradually degraded to a point where it becomes noticeable to the users. The Automatic Workload Repository (AWR) Compare Periods report enables you to compare database performance between two periods of time.
While an AWR report shows AWR data between two snapshots (or two points in time), the AWR Compare Periods report shows the difference (ABE) between two periods (or two AWR reports with a total of four snapshots). Using the AWR Compare Periods report helps you to identify detailed performance attributes and configuration settings that differ between two time periods.

QUESTION 47
Examine the parameter for your database instance:

(Exhibit not reproduced.)
You generated the execution plan for the following query into the plan table and noticed that a nested loop join was chosen. After actual execution of the query, you notice that a hash join was used in the execution plan:
Identify the reason why the optimizer chose different execution plans.

(Exhibit not reproduced.)

A.    The optimizer used a dynamic plan for the query.
B.    The optimizer chose different plans because automatic dynamic sampling was enabled.
C.    The optimizer used re-optimization cardinality feedback for the query.
D.    The optimizer chose different plan because extended statistics were created for the columns used.

Answer: B
Explanation:
* optimizer_dynamic_sampling
OPTIMIZER_DYNAMIC_SAMPLING controls both when the database gathers dynamic statistics,
and the size of the sample that the optimizer uses to gather the statistics.
Range of values: 0 to 11
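A hedged illustration of the setting; the Note section of the cursor plan reports when dynamic statistics were used:
SQL> ALTER SESSION SET optimizer_dynamic_sampling = 11;   -- automatic dynamic sampling
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);      -- plan of the last statement, including the Note section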

QUESTION 48
Which three statements are true about adaptive SQL plan management?

A.    It automatically performs verification or evolves non-accepted plans, in COMPREHENSIVE mode when
they perform better than existing accepted plans.
B.    The optimizer always uses the fixed plan, if the fixed plan exists in the plan baseline.
C.    It adds new, better plans automatically as fixed plans to the baseline.
D.    The non-accepted plans are automatically accepted and become usable by the optimizer if they perform
better than the existing accepted plans.
E.    The non-accepted plans in a SQL plan baseline are automatically evolved, in COMPREHENSIVE mode,
during the nightly maintenance window and a persistent verification report is generated.

Answer: ADE
Explanation:
With adaptive SQL plan management, DBAs no longer have to manually run the verification or evolve process for non-accepted plans. When automatic SQL tuning is in COMPREHENSIVE mode, it runs a verification or evolve process for all SQL statements that have non-accepted plans during the nightly maintenance window. If the non-accepted plan performs better than the existing accepted plan (or plans) in the SQL plan baseline, then the plan is automatically accepted and becomes usable by the optimizer. After the verification is complete, a persistent report is generated detailing how the non-accepted plan performs compared to the accepted plan performance. Because the evolve process is now an AUTOTASK, DBAs can also schedule their own evolve job at end time.
Note:
*The optimizer is able to adapt plans on the fly by predetermining multiple subplans for portions of the
plan.
*Adaptive plans, introduced in Oracle Database 12c, enable the optimizer to defer the final plan decision for a statement until execution time. The optimizer instruments its chosen plan (the default plan) with statistics collectors so that it can detect at runtime if its cardinality estimates differ greatly from the actual number of rows seen by the operations in the plan. If there is a significant difference, then the plan or a portion of it will be automatically adapted to avoid suboptimal performance on the first execution of a SQL statement.
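A hedged sketch of inspecting and manually evolving a baseline (the SQL handle is a placeholder):
SQL> SELECT sql_handle, plan_name, enabled, accepted FROM dba_sql_plan_baselines;
SQL> VARIABLE rep CLOB
SQL> EXEC :rep := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(sql_handle => 'SQL_abc123');
SQL> PRINT rep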

QUESTION 49
You create a new pluggable database, HR_PDB, from the seed database. Which three tablespaces are created by default in HR_PDB?

A.    SYSTEM
B.    SYSAUX
C.    EXAMPLE
D.    UNDO
E.    TEMP
F.    USERS

Answer: ABE
Explanation:
*A PDB has its own SYSTEM, SYSAUX, and TEMP tablespaces. It can also contain other user-created tablespaces.
*
*Oracle Database creates both the SYSTEM and SYSAUX tablespaces as part of every database.
*tablespace_datafile_clauses
Use these clauses to specify attributes for all data files comprising the SYSTEM and SYSAUX tablespaces in the seed PDB.
Incorrect:
Not D: A PDB cannot have an undo tablespace. Instead, it uses the undo tablespace belonging to the CDB.
Note:
* Example:
CONN pdb_admin@pdb1
SELECT tablespace_name FROM dba_tablespaces;
TABLESPACE_NAME
——————————
SYSTEM
SYSAUX
TEMP
USERS
SQL>

QUESTION 50
Which two statements are true about variable extent size support for large ASM files?

A.    The metadata used to track extents in SGA is reduced.
B.    Rebalance operations are completed faster than with a fixed extent size
C.    An ASM Instance automatically allocates an appropriate extent size.
D.    Resync operations are completed faster when a disk comes online after being taken offline.
E.    Performance improves in a stretch cluster configuration by reading from a local copy of an extent.

Answer: AC
Explanation:
A: Variable size extents enable support for larger ASM data files, reduce SGA memory requirements for very large databases, and improve performance for file create and open operations.
C: You don't have to worry about the sizes; the ASM instance automatically allocates the appropriate extent size.
Note:
*The contents of ASM files are stored in a disk group as a set, or collection, of data extents that are stored on individual disks within disk groups. Each extent resides on an individual disk. Extents consist of one or more allocation units (AU). To accommodate increasingly larger files, ASM uses variable size extents.
*The size of the extent map that defines a file can be smaller by a factor of 8 and 64 depending on the file size. The initial extent size is equal to the allocation unit size and it increases by a factor of 8 and 64 at predefined thresholds. This feature is automatic for newly created and resized datafiles when the disk group compatibility attributes are set to Oracle Release 11 or higher.
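A hedged reminder that variable size extents require the disk group compatibility attributes to be 11.1 or higher (the disk group name and versions are illustrative):
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '12.1';
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '12.1';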

If you want to pass the Oracle 12c 1Z0-060 exam successfully, we recommend reading the latest Oracle 12c 1Z0-060 Dumps full version.

http://www.lead2pass.com/1z0-060.html
