Oracle FAQ
Last updated: copied over from State, updated with some Primatics work, updated for College Board work and Exadata

Major Sections:
Oracle Wish List
Oracle Corporate/Licensing/Purchasing/Metalink/TAR/CSI
Oracle-to-Sybase equivalents
SQL*Loader/SQL Loader/exp/imp/export/import/expdp/impdp/Data Pump
TOAD/SQL Developer
Oracle Enterprise Manager/OEM/Monitoring
SQL*Plus/sqlplus/SQL Plus
PL/SQL Coding Specific
PL/SQL Coding Specific: Dates only/Date Specific
Oracle Concepts
Streams Specific/Streams
Installation/Configuration Theory
Administrative/Operations
Windows-Specific Administration Questions
Space Management/Storage Management/ASM
Performance/Tuning
Initialization Parameters
Archive Log/Redo Logs/Logging
Backup/Recovery
RMAN Specific
Security
Replication/Standby/Data Guard Operations/Data Guard
Administrative/Management Issues
Data Warehousing/Data Warehouse Specific
Exadata/Exadata specific

Note: searching for two "??" question marks together will find questions that I've either not had time to research, or whose answers I don't know or feel need further research. Any feedback/additions/suggestions are welcome.

Great article on Oracle myths (a common theme here):
https://richardfoote.wordpress.com/2007/12/12/why-are-there-so-many-oracle-related-myths-the-inconvenient-truth/

Big names in Oracle blogging: Tom Kyte, Jonathan Lewis, Steve Adams, Don Burleson (sometimes for the wrong reasons), Anjo Kolk, Mogens Norgaard, Richard Foote, Tanel Poder, Cary Millsap, Kerry Osborne, Uwe Hesse
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Oracle Wish List: a list of features/improvements we all would like to see in Oracle
- create unique index with an ignore_dup_key option, a la Sybase/MS SQL Server, to quickly remove duplicate records
- Named buffer caches to expand the simplistic KEEP and RECYCLE pools
- Better backup documentation, clearly defining which objects have and have not been backed up (explicitly listing objects created with nologging)
- Better error logging/descriptions for the various TNS errors encountered
- Remove the 30-character limits on table and object names! They make conversions to/from databases without these limitations nearly impossible.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Oracle Corporate/Pricing/Licensing/Purchasing/Metalink/TAR/CSI
Lots of these links are on www.bossconsulting.com/oracle_dba/tahiti.html
--- Q: What is a TAR?
A: TAR == Technical Assistance Request: basically a trouble ticket. Note: the term "TAR" is now deprecated; "SR" or Service Request is the new moniker.
--- Q: What are the severity definitions when reporting a TAR/SR?
A: This is important, because the higher the reported severity, the faster the response and the higher the expectation of response from support.
Severity 1: a production database is down or severely impacted. Oracle will respond within the hour or hand off to a specialist, and 24x7 resolution is expected on both sides.
Severity 2: Severe loss of service in production, or a dev/test machine down with no acceptable workaround.
Response expected within 4 hours.
Severity 3: Minor loss of service; a workaround exists.
Severity 4: No loss of service; an informational answer is desired.
--- Q: How do you report a problem to Oracle's tech support?
A: http://metalink.oracle.com (you'll need a "CSI" (Customer Support Identifier) to continue); now support.oracle.com. (800) 223-1711 is a direct method as well.
--- Q: How do you know the details of your Oracle Metalink license? How long does my support last?
A: In Metalink, click on User Profile, then Show License, and drill down on the "show" button; the expiration date for support is listed. It also lists how many Metalink users you get and the support level (Gold, Silver, etc.).
--- Q: How do I know how many licenses I own of what product?
A: Call your Oracle sales rep. Have your CSI handy. Oracle can quickly generate a roster of all products owned.
--- Q: What are the different Oracle support levels? What do you get with each?
A:
- Bronze: Metalink login, TAR creation, telephone support 9-5 M-F, patches, maintenance bug fixes, etc.
- Silver: Everything in Bronze, plus 24x7 Severity 1 response and scheduled onsite customer support.
- Gold: Everything in Silver, plus access to dedicated support technicians only available to Gold members, and an SLA with the client on turnaround, response time, etc.
- Platinum: Everything in Gold, plus remote monitoring and remote patching support.
Update: as of 10/08 (probably much earlier) the above levels have been removed. Now all users can call in at any time to get support at the former "Silver" level. Assume that Gold still exists, but in some premium form. This question can now be answered somewhat by this link: http://www.oracle.com/support/collateral/oracle-technical-support-policies.pdf
--- Q: What website has Oracle binaries for download?
A: otn.oracle.com (you must sign in). Full, development-use-only releases are available for every supported platform. Nice.
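Related: once a downloaded release is installed, you need the exact version/patch level you are running before you can check it against the support-status matrix in the next question. A quick sketch using standard dictionary views (dba_registry_history requires 10g or later):

```sql
-- Exact release/patch level of the instance you're connected to
select banner from v$version;

-- 10g+: upgrade/patch actions recorded against this database over time
select * from dba_registry_history order by action_time;
```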
--- Q: How do I know when versions are obsoleted? When is my version EOL'd or end-of-life?
A: Generally, press announcements from Oracle. Oracle follows a process of introducing new products and then de-supporting the old ones in turn, keeping only 2 active code bases. Metalink note Doc ID 148054.1 talks about 8i's extension.
Main reference document to check: Oracle Database (RDBMS) Releases Support Status Summary [Doc ID 161818.1]
--- Q: What is the difference between Premier support and Extended support?
A: See Metalink Doc ID 971415.1 for support policy documentation. Per the above end-of-life doc:
- Premier: allows new bugs/issues to be fixed (you are expected to be at the latest patch set in order to get one-offs, aka Patch Set Exceptions)
- Extended: purchasable support option which allows one-off fixes for critical issues
- Sustaining: indefinite assistance with service requests, on a commercially reasonable basis, 24x7
--- Q: Do I have to pay for licenses on my dev/test servers?
A: Arguable... for years I thought "no," but then a customer was forced to pay for their dev environments.
Links: http://www.oracle.com/technology/software/index.html
Text from the top of the page: "Free to download, free to learn, unlimited evaluation. All software downloads are free, and each comes with a Development License that allows you to use full versions of the products at no charge while developing and prototyping your applications (or for strictly self-educational purposes). In some cases, certain downloads (such as Beta releases) have licenses with slightly different terms. You can buy products with full-use licenses at any time from the online Store or from your sales representative."
The way I read that, it says to me "development licenses are free." However, Oracle has responded in the past to the question as follows:
Q: When purchasing Oracle licenses, are dev/test licenses treated differently than production servers?
A: Oracle licenses are considered the same for production and development/test. However, customers typically license the dev/test servers with Named Users vs processor-based licenses.
Other links related to licensing terms:
http://www.oracle.com/technetwork/licenses/wls-dev-license-1703567.html
http://www.oracle.com/us/corporate/pricing/olsadef-ire-v122304-070549.pdf
http://www.oracle.com/us/corporate/pricing/databaselicensing-070584.pdf
http://www.jobacle.nl/?p=868
--- Q: Do I have to pay for licenses on my Data Guard standby server?
A: Yes. It counts as a production server and needs to be paid for accordingly.
--- Q: What is the pricing difference between Standard and Enterprise editions?
A: Significant. See the pricing question. Usually Enterprise Edition will run at least 2.5 times as expensive.
--- Q: What is the feature difference between Enterprise, Standard and Standard One? How about Express Edition?
A: http://www.oracle.com/us/products/database/product-editions-066501.html
Standard Edition does not have partitioning, certain backup features, transportable tablespaces, the ability to use certain options like Spatial, most data warehousing features, or most high-end 11g features. Express Edition is meant for individual development; it can only use 1 GB of RAM and can only support a database of 4 GB.
Same link for MS SQL Server environments: http://www.microsoft.com/sqlserver/2005/en/us/compare-features.aspx
--- Q: Are RAC and Spatial extra costs? How about Data Guard?
A: RAC and Spatial cost extra, per CPU. Data Guard comes free with EE.
Apr 2014 clarification: RAC is an extra cost for EE but is INCLUDED in SE. Weird. DC gov't definitely was paying for RAC licenses when using it, so it isn't free.
--- Q: What is Named User licensing? Does it still exist in 9i and 10g?
A: Yes; it exists in all versions. What is it? Instead of getting a "perpetual" license with unlimited users, you can pay by the user.
However, different versions have different minimum user counts. Named User Perpetual = NUP for short. Generally the ratio is 25 to 1, but you can get "deals" in some cases.
SE: 5 named users minimum
EE: 25 named users minimum, plus a per-processor fee
See http://oraclestore.oracle.com/OA_HTML/ibeCCtpSctDspRte.jsp?section=11365&media=os_user_minimums for a full explanation of the minimums by product, plus a calculator.
Also, see this link for a definition of Named User: http://oraclestore.oracle.com/OA_HTML/ibeCCtpSctDspRte.jsp?section=11365&media=os_g_english_help_licensing
NUPs are generally used to license non-prod engineered solutions, or to license "lesser" production environments, or to license "pseudo-production" environments that are not customer facing but still in use.
Oracle license: http://www.oracle.com/us/corporate/pricing/olsadef-ire-v122304-070549.pdf
--- Q: How much does Oracle cost? What is Oracle pricing?
A: Retail pricing is published, but nearly everyone gets some sort of discount, and those discounts can be negotiated. "Per processor" means per CORE if you have dual- or quad-core CPUs (see the core factors for what you need).
Retail numbers from oraclestore.oracle.com 10/23/08 for 11g. Still accurate 8/4/10; still accurate Oct 2016.
DB EE: Perpetual license per processor: $47,500
DB EE: Named User: $950 each (25 min): $23,750 minimum
DB SE: Perpetual license per processor: $17,500
DB SE: Named User: $350 each (5 min): $1,750 minimum
DB SE One: Perpetual license per processor: $5,800
DB SE One: Named User: $180 each (5 min): $900 minimum
--- Q: How does Oracle define named users, per processor, etc.?
Q: Does Oracle count a "core" as an entire processor? How many licenses do I need on my O/S? Core factors.
A: Depends on the vendor of the chip! See this list for factors.
http://www.oracle.com/us/corporate/contracts/processor-core-factor-table-070634.pdf
Roughly: Sun/Linux .25; AMD/Intel .5; some HPs and some Fujitsus/Suns .75; all IBM Power chips and modern Intel chips 1.0.
See this link at oraclestore for both questions: http://oraclestore.oracle.com/OA_HTML/ibeCCtpSctDspRte.jsp?section=11365&media=os_g_english_help_licensing
Rough answer for processors/cores:
Unix: count the total number of cores, multiply by the factor, round up = # of processor licenses.
Example: Sun UltraSPARC T1 w/ 6 cores * .25 = 1.5, round up to 2 = # of licenses to purchase.
Intel/AMD: count the total number of cores, multiply by .5, round up = # of processors to license.
Example: Win2003 server w/ 2 dual-core Xeons = 4 cores * .5 = 2 licenses.
Note: see DBA_CPU_USAGE_STATISTICS for the cpu_count and cpu_core_count values, to see what a particular database is seemingly "licensed to run."
SQL> show parameter cpu;
SQL> select * from DBA_CPU_USAGE_STATISTICS order by timestamp desc;
--- Q: How do cores count on AWS? How do you license vCPUs on an EC2?
A: http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf
They work just like Linux licenses: 2 cores per EE CPU license.
--- Q: How expensive is Oracle licensing versus its main competitors?
A: ?? Need more detailed information. At a high level:
o DB2: generally about the same cost
o Microsoft: probably 10%-25% of the cost
o MySQL/PostgreSQL: licenses free
--- Q: How much is Oracle support per year?
A: Rule of thumb: 25% of the cost of your original license. So if you pay full retail price for an EE license ($47,500) you'll pay about 25% of that per year in support ($11,875). Nobody pays full price though. GSA rates are 40% off the top.
Using DC government's 2009 numbers as examples:
1 Enterprise Edition support license (believed 55% discount): $3,742.17/year
1 RAC perpetual license on discount: $3,077.12/year
1 Spatial perpetual processor license on discount: $1,538.56/year
2007 invoice example: retail cost $8,800/year per EE license; $8,800/year also for 1-many licenses of Spatial or RAC.
--- Q: What features of Oracle are included versus what features are extra costs? Is partitioning free? Is Virtual Private Database free?
A: docs.oracle.com/cd/E11882_01/license.112/e47877.pdf is a link to Oracle Database Licensing Information for 11gR2, dated July 2013. Search for the feature you're interested in. Generally speaking, all features are in Enterprise Edition but may not be in Standard Edition One or Standard Edition. And some of these are not the same for prior versions.
If you have 11g Enterprise Edition, the following major features are included at no extra cost:
- Regular Data Guard features
- Online index rebuild/table redefinition
- Point-in-time recovery
- Flashback
- SQL Result Cache
- Virtual Private Database (VPD)
- Basic table compression
- Parallel functionality
- CDC
- Query Rewrite
- Basic and Advanced Replication
While the following features in Enterprise Edition are extra costs:
- Active Data Guard
- RAC One Node
- RAC
- AWM
- In-Memory Database Cache
- Oracle Advanced Security
- Most "Packs," including Change Management Pack, CM, Diagnostics, Tuning, Real Application Testing
- Partitioning
- Oracle OLAP
- Advanced Analytics
- Advanced Compression
- Oracle Label Security
- Spatial
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Oracle-to-Sybase/MS SQL Server/Microsoft SQL Server equivalents; need to migrate this to rosetta stone...
- $ORACLE_HOME == $SYBASE (sometimes $ORACLE_BASE is used)
- desc[ribe] == sp_help [tablename]
- select * from user_tab_columns where table_name='TABLE'; == sp_help tablename
- show user == select user_name()
- password == sp_password (pre Oracle 8.0: alter user)
- alter table a rename to b OR rename a to b == sp_rename
- select * from v$session == sp_who
- select * from user_views == sp_helptext [view_name]
- prompt "text" == print "text" (shows text on the screen)
- select * from v$tablespace == select name from sysdatabases
- alter session set current_schema=XXX; == "use XXX"
--- Q: What are some of the major considerations when doing a Sybase to Oracle migration?
A:
- PL/SQL and T-SQL are vastly different; any triggers, stored procs, or functions you've got will probably have to be completely rewritten.
- Your underlying database storage configuration is about to get a whole lot more complicated. Sybase's "database" is only loosely translatable to an Oracle "tablespace" or "schema" or even "user."
- Backups in Oracle are far, far more complicated than in Sybase. Oracle depends a lot more on inherent OS backup utilities to back itself up. Oracle's "export" command is roughly equivalent to "dump database" but does not maintain the transaction log chain that dump tran/dump database does. In fact, the transaction log dumps are so tenuously connected that I'd be deathly afraid of a recovery situation in Oracle (knock on wood; haven't had to yet).
- The command line tool for Oracle (sql*plus) is awful; there's nothing like sqsh ... sqlplus is like a worse version of isql.
- There's no easy way to get data OUT of Oracle in a readable fashion, as with bcp. You'd basically have to write your own custom scripts to dump data in a comma-delimited format (you can see examples of this at www.bossconsulting.com/oracle_dba).
Then, from a management perspective (as in, things that appeal to managers):
- Oracle has about 40% of the market, Sybase 3%.
It's only a matter of time before Sybase gets bought or goes away.
- Oracle has a huge established base of customers, there are many active technical lists to subscribe to, there are hundreds of books available, and there are 10 times as many Oracle professionals that you can hire from.
- Every product out there works with Oracle; lots don't work w/ Sybase.
On the bright side, Oracle has lots of features that Sybase doesn't, some of which I use constantly:
- Materialized views: essentially views that actually create underlying tables of the data.
- Query rewrite: Oracle's optimizer can redirect queries to different objects automatically if it thinks the data is accessible and can be returned faster.
- Star transformation: in data warehouse environments, Oracle can alter the way table joins are done to perform them in a more "star schema" mechanism.
- Bitmap indexes: Sybase IQ has them, but Sybase ASE doesn't (at least not in version 11; maybe something recently added).
- Function-based indexes: you can create an index like this:
create index xyz on addressbook (upper(lastname));
and then any queries that call "upper" in the predicate can use this index. Very cool, since otherwise upper() would probably result in a table scan.
--- Q: What issues will you run into if you want to convert a Microsoft SQL Server database to Oracle?
A: Here's a quick list of typical issues:
- Length of table, column, and variable names (Oracle limits them to 30 chars; Microsoft does not)
- Data type conversion issues (an example was varchar(8000) versus the varchar2(4000) max in Oracle). Microsoft allows nvarchar(max), but you have to specify a length in Oracle.
- Reserved word issues in code and tables ("text" as a column name is frequently a problem)
- Stored procedure conversions vis-à-vis number of rows and binding to variables
- Identities: this is how SQL Server autogenerates PKs. No such concept in Oracle; you can use sequences, but they are not bound to the table create DDL like an identity is.
- Indexing: SQL Server has clustered and non-clustered indexes, and that's it. Oracle's concept of a clustered index is rarely used in practice, and all its indexes are non-clustered equivalents. This is important when it comes to performance and tuning of queries that, in SQL Server, depend on the sort order of the data.
- User security: SQL Server has "logins" that are granted privs on "databases," which have tables. Oracle has "schemas" that are both logins and pseudo-databases in and of themselves.
- sp_primarykey/sp_foreignkey: does this issue exist in SQL Server? Perhaps not.
- No delete from table cascade.
- No create unique index (field) with ignore duplicates in Oracle.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
SQL*Loader/sqlldr/SQL Loader/exp/imp/export/import/expdp/impdp/Data Pump
Answers in rough order of SQL*Loader, exp/imp and Data Pump
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Q: Can you automatically generate skeletal sqlldr control files?
A: I wish. The easiest way to create them is to take the output of a desc of the target table, then in vi:
- put "LOAD DATA INTO table tgt_table (" at the top of the file
- :.,$ s/(..*)// (this gets rid of the (10) in varchar2(10))
- :.,$ s/VARCHAR2/char terminated by '\~'/
- :.,$ s/NUMBER/integer external terminated by '\~'/
- :.,$ s/DATE/date "DD-MON-YY" terminated by '\~'/
- :.,$ s/$/,/ (put a comma at the end of all lines if not there already)
- clean the "terminated by" clause from the last column, close w/ ")"
and then run the command:
sqlldr user/password@sid data=datafile control=ctlfile log=logfile
(datafile defaults to the ctlfile name)
Update: see bossconsulting.com/oracle_dba for a perl routine that automates this.
--- Q: What are the main datatypes you typically load data into, and how do you format them within the ctl file?
A: Characters, numbers and dates.
COMPLAINT_DESCR char,
STD_DEPLOYMENT integer,
STATUS_DATE date "MM/DD/YYYY",
You can also use functions/transformations on the fly.
Uppers, to_numbers, etc.
--- Q: How do I load decimals, floats, currency, etc.?
A: Use the "external" numerics. "integer external" works just fine for all these datatypes.
--- Q: How do I sql load data into a date field?
A: fieldname date "DD-MON-YY"
or STATUS_DATE date "MM/DD/YYYY",
Set the date mask to match how the incoming data looks.
--- Q: What is sql*loader direct path?
A: A pseudo-equivalent of doing a "fast bcp" in Sybase. Set these options for sql*loader:
direct = true
unrecoverable
load truncate
sorted indexes (iot_pk) (if loading to an IOT)
You should also turn OFF logging on the table.
--- Q: How can I limit the number of rows sqlldr loads in?
A: the load=X parameter.
--- Q: How do you sql load in data with line breaks in the row?
A: Use the "stream record format" option. This tells sql loader NOT to use "end of line" as the record separator. You'll have to identify the record separator. ?? examples.
--- Q: What is a control file example for a simple CSV dump from MS Excel?
A: options (Direct=true)
load data
INTO table unittype
fields terminated by ',' optionally enclosed by '"'
trailing nullcols
( VEHTYPE char ,
MANPOWER integer external ,
STD_DEPLOYMENT integer external,
... other columns )
Note: this assumes the target table is empty; truncate the table beforehand unless you specify append/replace/truncate (see next question). And if you save-as in Excel to CSV format, you'll have to delete the first (header) line.
--- Q: How do you sql load into a table that already has data?
Q: I got this error while sql loading: SQL*Loader-601: For INSERT option, table must be empty. Error on table UNITXREF
A: Use "APPEND" instead of INSERT in the ctl file. Example:
options (Direct=true)
load data
append INTO table pcard_data_source
fields terminated by "," optionally enclosed by '"'
trailing nullcols
( HIERARCHY_LEVEL_1 char,
HIERARCHY_LEVEL_2 char, ...
You can also use these two options:
REPLACE: deletes all existing rows from the table (cascading constraint deletes), then inserts from the sql loaded file.
TRUNCATE: truncates the table, then re-inserts.
--- Q: How do you sql load and skip the first row (as you need to if you save-as an XLS spreadsheet to CSV)?
A: use the SKIP option: skip=1 on the sqlldr command line, or OPTIONS (SKIP=1) in the control file.
--- Q: My data isn't truly comma delimited and has some field enclosure characters. How do I handle these cases?
A: use the optionally enclosed by '"' option.
--- Q: How do you sqlload into a table with a clob field (for text fields > 4000 chars)?
A: LOBFILE(filename) in the ctl file. Need working examples though; seems difficult/clunky to work with.
--- Q: How do I get large character data into a varchar2(4000) field? I keep getting errors.
A: In the ctl file, designate the max size on the char line:
... REMARK_TEXT char(4000), ...
This should neatly convert the data.
--- Q: How can I get command line syntax for imp/exp?
Q: How can I get the version of the exp or imp binaries?
A: exp help=y or imp help=y prints the version of the binary on the first line, then the help parameters.
--- Q: Can you do a table-level recovery in Oracle?
A: Yes, using an exp dump file and the TABLES option:
exp blake/paper FILE=blake.dmp TABLES='dept, manager' ROWS=y COMPRESS=y
How about from rman? No, rman cannot do table-level recovery; it specializes in doing tablespace recovery. You can set up point-in-time recovery to recover tables to specific points (like prior to a drop table), but non-logged operations (like insert/appends) won't be there.
--- Q: How can you avoid blowing out the rollback segments while doing imp?
A: use the "commit=y" option. It will be slower, but won't run out of rollback.
--- Q: How can you use exp/imp to unfragment a tablespace?
A: export the tablespace full with the "compress=n" option, truncate its tables, coalesce the tablespace, then imp with the "ignore=y" option.
--- Q: What is "Direct path" exp versus "Conventional path" exp? Pros and cons?
A: exp ...
-direct=y invokes direct path export; -direct=n invokes conventional.
Conventional: uses a SQL statement to retrieve the rows targeted for export.
Direct path: data is read directly from disk to the export client.
Pros: Direct path is MUCH faster; 2-3 times faster, since there's no SQL layer and no need to do expression evaluation.
Cons:
- bug in some exp versions (8.1.7.x, 9.x?) that corrupts the dump file when there are migrated or chained rows.
- Cannot use the "consistent=y" option, therefore not guaranteeing consistent data.
- some issues with objects/blobs/clobs. Probably fixed in >8i exp.
- Can only import to the same Oracle version w/ the same character set?
- VPD and Label Security rulesets are ignored.
--- Q: What are the default values of imp/exp when exporting (so that you don't have to explicitly set them)?
A: compress=y (data is unfragmented and consolidated upon imp)
consistent=n (whether or not Oracle "locks" the schemas while exporting)
constraints=y (exports table constraints)
direct=n (direct path versus conventional path)
feedback=0 (feedback on rows exported; if a number, a message is printed every X rows)
file=expdat.dmp (filename for export)
full=n (does not export full database).
If specified, owner/tables are ignored
grants=y (exports grant statements)
indexes=y (exports indexes)
object_consistent=n (same as consistent, but object-by-object)
resumable=n (resumable space allocation)
rows=y (exports rows of the table)
statistics=estimate (estimate, calculate, none are the options)
transport_tablespace=n (see the section on transportable tablespaces)
triggers=y (gets triggers)
tts_full_check=false (checks IN pointer dependencies)
Options assumed blank unless otherwise specified:
buffer (OS dependent; specifies the buffer size used to retrieve rows)
filesize (specifies max dump file size before starting a new file)
file (used w/ filesize to name the export files)
flashback_scn
flashback_time
help (prints options)
log (logs messages to a log file as well as the screen)
owner (specifies the schema/user to export; typically used by DBA logins): do not specify if using the "tables" option as well.
parfile (specifies a parameter file to read parameters from)
query (can limit rows by a query)
recordlength (only necessary when there is an exp/imp OS mismatch)
resumable_name, resumable_timeout (only w/ resumable)
tables: can specify specific tables to export (mutually exclusive of owner)
tablespaces: can specify specific tablespaces to export, instead of schemas
userid/password: must be supplied if doing exp non-interactively
volsize
Imp: many different ones.
--- Q: What are some caveats/good things to remember to do post imp?
A:
- Check the imp log: Java table objects may not create automatically.
- Any view/proc/function will fail if created with hard-coded schema names (i.e., create or replace force view oldschema.viewname as ...). They'll be recreated in the current schema with the correct references but will need to be recompiled. The recompile is trivial.
- Any hardcoded schema names within the code will NOT be fixed automatically, and may cause issues (example: if the new schema has select permission on a third schema, and this third schema is hard coded into a view ... ).
- The new user may have to be granted the same select access to third-party schema table objects as the original importer for views/procs/etc. to work.
--- Q: Can I specifically skip exporting statistics on tables? I'd like to avoid the common "EXP-00091: Exporting questionable statistics." error.
A: add "statistics=none" to your exp line. (thanks "Torgeir Toms" from lazydba.com post 4/23/04)
--- Q: Is there a "batch" flag or a commit flag in sqlldr? Or does the tool just commit whenever it feels like?
A: For conventional path loads, rows=X sets the commit interval, but the bindsize buffer caps it, so it can seem to commit whenever it reaches some limit no matter what you put as the rows=X value.
--- Q: My export dump files are blowing past the 2gb file size limit on my operating system. How do I dump my schema?
A:
- exp ehri20test/ehri20test@ehrius tables='emply' file='1.dmp,2.dmp...' filesize=2048m
- dump to a pipe, which compresses the data on the fly
- dump to a pipe that splits at the 2gb file size (use mkfifo, or mknod name p, to create the pipe)
But with the filesize approach, how would you know how big to make the files, except by trial and error?
--- Q: Can you predict how big your export file will be?
A: It's roughly 10% greater than num_rows*avg_row_len out of dba_tables.
Case study, table x:
- 1805312: exp dump size
- 6291456: size of segments for the table
- 6291456: size of extents allocated
- 1653386: num_rows*avg_row_len in dba_tables
--- Q: Can you suppress the "Commit point reached - logical record count XXXXX" messages when doing a sqlldr in?
A: try silent=feedback on the sqlldr command line; otherwise, redirecting output to /dev/null is the fallback.
--- Q: How can I see the contents of a .dmp file without actually importing it?
A: imp ... show=y
--- Q: I try to export from a server and I get these messages:
EXP-00008: ORACLE error 942 encountered
ORA-00942: table or view does not exist
EXP-00024: Export views not installed, please notify your DBA
EXP-00000: Export terminated unsuccessfully
A: Run @$ORACLE_HOME/rdbms/admin/catexp.sql as sys.
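The "dump to a pipe, which compresses the data on the fly" trick above can be sketched as follows. This shows the plumbing only: plain cat stands in for the exp binary (in real use the writer would be something like exp user/pwd owner=scott file=$PIPE log=exp.log), and the pipe and file names are arbitrary.

```shell
# Compress an "export" on the fly through a named pipe.
# cat stands in here for exp, the real writer.
PIPE=/tmp/exp_pipe_demo
rm -f "$PIPE"
mkfifo "$PIPE"                              # create the pipe (no root needed)
gzip -c < "$PIPE" > /tmp/demo_exp.dmp.gz &  # reader: compress as data arrives
cat /etc/hosts > "$PIPE"                    # writer: stand-in for exp
wait                                        # let gzip finish draining the pipe
rm -f "$PIPE"
```

The same pattern with a splitter (e.g. split -b 2048m) as the reader is the usual answer to the 2 GB file-size limit.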
--- Q: I try to export from a server and I get these error messages:
EXP-00056: ORACLE error 31600 encountered
ORA-31600: invalid input value EMIT_SCHEMA for parameter NAME in function SET_TRANSFORM_PARAM
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.DBMS_METADATA_INT", line 3926
ORA-06512: at "SYS.DBMS_METADATA_INT", line 4050
ORA-06512: at "SYS.DBMS_METADATA", line 836
ORA-06512: at line 1
EXP-00000: Export terminated unsuccessfully
A: run @$ORACLE_HOME/rdbms/admin/catpatch.sql as sys (after opening the db for migrate).
--- Q: Does exp/imp actually import indexes?
A: No; it issues a new create index statement on the data post-import.
--- Q: Can you export just one partition of a table?
A: Sure:
exp user/pwd@sid file='file.dmp' log='log.log' tables='table:partition_name'
--- Q: Can I pass a parameter to imp to overwrite existing tables in the schema, if they already exist?
A: Not in 9i or below. In 10g, table_exists_action=replace is an option for the "impdp" utility.
--- Q: Can you redirect the tablespace that a table gets created in automatically when importing a file?
A: ??
--- Q: What happens if you export and import a materialized view?
A: the export file treats the MV as a materialized view and creates its DDL as such, then exports and imports the data rows like a table.
--- Q: How do I import data into an existing table?
Q: I get IMP-00015: following statement failed because the object already exists: How do I import my data?
A: ignore=y: it will ignore the failed table create attempt and import the rows into the existing table.
--- Q: Can I export and import an entire database?
A: Yes, with the full=y option:
exp sys/mgr@sourcedb full=y file=exp1.dmp log=exp.log
imp sys/mgr@targetdb full=y file=exp1.dmp log=imp1.log inctype=system
imp sys/mgr@targetdb full=y file=exp1.dmp log=imp2.log inctype=restore
The first import will create all the users, the second will import the data.
--- Q: Do I have to have the touser schema created in order to import?
A: Yes, if you're changing users during the import. Otherwise you'll see:
Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing NEWUSER's objects into NEWUSER
IMP-00003: ORACLE error 1435 encountered
ORA-01435: user does not exist
Import terminated successfully with warnings.
--- Q: In 10g, what is the "Data Pump" utility?
A: the replacement for exp/imp. Quick-and-dirty usage examples:
expdp system/pwd@sid directory=DATA_PUMP_DIR dumpfile=dumpfilename.dmp logfile=logfilename.log schemas=scott
expdp user/pwd@sid DUMPFILE=dp_dir:dumpfilename.dmp LOGFILE=dp_log_dir:logfilename.log PARALLEL=6 tables=(a,b,c) JOB_NAME=myjobname
impdp dbauser/pwd@sid directory=DATA_PUMP_DIR dumpfile=file.dmp logfile=file.log tables='owner.tablename' remap_schema=oldowner:newowner
DATA_PUMP_DIR comes installed by default in $ORACLE_HOME/rdbms/log.
You can create your own directory like this: create directory dpump_homedir as '/tmp/dumps';
Non-DBA users must be granted read,write on each directory to be able to create dumps in it. Users must also have the create table priv in order to create the master job table.
Pros:
- Faster
- compresses on the fly
- can run in parallel
- server based
- more security, more control over who can run what exports
Cons to using Data Pump:
- Because it's now server based, you can't expdp from serverA and impdp into serverB, because the new server can't "see" the dump file sitting on serverA's unix file system. You'll have to ftp/cp the file from serverA to serverB, and specifically into a directory the database can see. (The one way around this is impdp's NETWORK_LINK option, which pulls the data over a database link without a dump file.)
- takes longer to set up and to execute individual commands.
- A regular user has to be granted permissions to read from the directory.
- You can't just create a quick local exp file.
--- Q: How do I grant a non-DBA user access to a directory so they can expdp?
A: grant read,write on directory data_pump_dir to bkrepos;
--- Q: Is there any way around the "EXP-00026: conflicting modes specified" error when trying to specify both a user and a list of tables?
A: No, even with datapump. Using expdp, you can dump a user excluding tables, then dump a user with a table list to accomplish what you need.
--- Q: I'm getting EXP-00003 no storage definition found for segment(0, 0). How do I fix it?
A: This looks like a bug in the exp utility. It apparently shows up when you try to export from a higher version (say 11.2.0.3) database than your exp version (11.2.0.1). If you have an empty table, the warning message shows up with numbers (0,0). To work around the error, some suggestions:
- alter table XXX allocate extent;
- upgrade the client
- export as a dba
Otherwise this looks harmless and doesn't seem to affect the export. If you get (x,y) instead of (0,0) though, you have a different issue (likely corruption) and should report an SR/TAR.
--- Q: What do all the export/exp parameters map to in expdp?
A: file:///C:/Oracle/Ora10g_Docs/server.102/b14215/dp_export.htm#i1005864
High-level mapping of the common ones:
file: dumpfile
log: logfile
owner: schemas
rows: rows=n is now content=metadata_only, rows=y is content=all
grants: grants=n is now exclude=grant
indexes: indexes=n is now exclude=index
statistics: obsolete, always gathers stats now
consistent: obsoleted; see FLASHBACK_SCN, FLASHBACK_TIME
These are the same:
tables: tables
full: full
help: help=y
--- Q: In data pump, how do you specify the directory of the log/dump files?
A: See the dictionary object DATA_PUMP_DIR, created at db creation.
SQL> CREATE DIRECTORY dpump_dir1 AS '/home/oracle/dumps';
select * from dba_directories;
and then:
expdp user/pwd DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr.log
impdp USERNAME/PASSWORD@ORADEV directory=data_pump_dir dumpfile=dev-snapshot.dmp remap_schema=testuser:USERNAME
You can also use remap_tablespace=oldts:newts if you want to import into a new tablespace.
--- Q: I'm trying to use datapump and getting the following errors:
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39087: directory name DAILY_EXPDP_DIR is invalid
A: You need to have read and write granted to you on the directory:
grant read,write on directory dp_dir to bosst;
--- Q: What is the equivalent of fromuser/touser syntax in the old imp tool?
A: remap_schema=olduser:newuser
Example:
impdp dboper/dboper@saupa directory=DATA_PUMP_DIR dumpfile=lndg_mom_20130912.dmp \
logfile=lndg_mom_20130912_impdb.log remap_schema=lndg_mom:lndg_gfms
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TOAD/SQL Developer
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
--- Q: How do I turn on Autocommit in Toad (it defaults to off)?
A: View->Options->Oracle-> click the "Commit automatically.." option.
--- Q: How do you make Toad show dates correctly?
A: Not really a Toad problem; Oracle defaults to showing dates without the timestamp. See the separate question related to "working with dates" in Oracle. The answer used to be setting NLS_DATE_FORMAT:
dw@dw30tst> alter session set NLS_DATE_FORMAT = "YYYY/MM/DD";
dw@dw30tst> select sysdate from dual;
1/12/10 update: View->Options->Data Grids->Data has a user-specified date and time format that overrides the session's.
--- Q: Why are tables that I've dropped still existent in the Toad Schema browser?
A: Because the Schema table is only generated upon initial connection.
If you break the connection and log back in, the table will be gone.
--- Q: What are some known bugs in TOAD that you need to be aware of?
A:
- Dates are not shown correctly
- Triggers will compile with errors but be shown as successful.
- Previous versions (say 7.6.x) won't work in 10g fully ... they'll see most objects but won't show LOB/CLOB or other 10g-specific data types, and will freeze constantly
--- Q: How can users see the bodies of packages in SQL Developer? How can userA see the pkg bodies and the text of userB's procedure code?
A: create any procedure. Obviously, this is a dangerous grant to give, but there is no such thing as "grant select on userA.pkg body to userB;"
--- Q: Toad shows (via the moving "green progress bar" in the bottom right) that a statement is still executing. How do I see what this statement is and how do I terminate it?
A: (v9.7 and later for sure, unknown about earlier versions): right click on the green progress bar, click "Activities" and see what is running.
--- Q: While attempting to connect to a database in Toad, I get this error: "Can't initialize OCI. Error -1" What does this mean?
A: Make sure oci.dll is in the path, from the same path as the ORACLE_HOME. Often this happens because you have ORACLE_HOME set to one directory but a different (older?) ORACLE_HOME referenced in the PATH.
--- Q: I have the ampersand (&) character in a field and Toad keeps prompting for a substitution variable. How do I fix this?
A: View->Toad Options->Editor->Execution/Compile and click off "Prompt for substitution variables". You apparently have to re-start Toad to get this to work.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Oracle Enterprise Manager/OEM/Monitoring/Grid Control/
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
--- Q: I don't have OEM and have to hand monitor/script the monitoring of my alert logs. What is a good list of key words to search for?
A: ORA-00600 (major server errors)
ORA-07445 (major process errors)
ALTER DATABASE CLOSE NORMAL (database shutdown)
ALTER DATABASE OPEN (database startup)
ORA-01555 (snapshot too old)
dbfile create errors:
ORA-19504: failed to create file
ORA-27040: file create error, unable to create file
SVR4 Error: 28: No space left on device
archive log errors:
ORACLE Instance $ORACLE_SID - Archival Error
ORA-16038: log 2 sequence# 861 cannot be archived
--- Q: Can OEM v9 manage/monitor previous versions of Oracle?
A: Oracle 8 and 8i and above, yes; 7.3.x and below, no.
--- Q: What seems to be the general consensus of OEM among the DBA community?
A:
- Infrequently used for administration purposes
- Events are ok, but custom scripts still preferred as more customized and comprehensive
- Statistics and graphical capabilities are great
- Job scheduling unreliable
- Slow application
10g: moved to a webclient, seems to be better received, but the 10g client seems to be missing some of the better process-monitoring capabilities of the standalone.
--- Q: How do I set up OEM to monitor my database? What is a good list of events to monitor?
A: 8i/9i: never set up, no details. 10g: log into the web-based client.
--- Q: I changed the pwd of SYSMAN and now my 10g OEM doesn't work. Why?
A: Because sysman is the OEM repository user and it needs to be able to log in. How do you change sysman's password? Per Metalink Note Doc id 259379.1:
- emctl stop dbconsole
- emctl status dbconsole (to make sure)
- connect / as sysdba and change the pwd
- alter user sysman account unlock (if needed)
- test login w/ new pwd
- cd $ORACLE_HOME/hostname_$ORACLE_SID/sysman/config
- cp emoms.properties emoms.properties.bak
- vi emoms.properties: edit the line oracle.sysman.eml.mntr.emdRepPwd=10d06795dfc60220 and replace the encrypted pwd w/ the new pwd; edit the line oracle.sysman.eml.mntr.emdRepPwdEncrypted=TRUE and replace TRUE with FALSE
- emctl start dbconsole
- recheck the emoms.properties file to make sure the value has been encrypted.
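The SQL portion of the sysman password change above can be sketched as follows; this is a hedged example (the new password is a placeholder, not a value from the original Metalink note):

```sql
-- Run as SYSDBA, after "emctl stop dbconsole" and before editing emoms.properties:
ALTER USER sysman IDENTIFIED BY new_password_here;  -- placeholder password
ALTER USER sysman ACCOUNT UNLOCK;                   -- only needed if the account locked itself out
-- Verify the repository user can actually log in before restarting dbconsole:
CONNECT sysman/new_password_here
```

After this succeeds, continue with the emoms.properties edit and "emctl start dbconsole" as listed above.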
--- Q: I can't log into OEM with an "insufficient privileges" error. What roles do I need granted so I can log in?
A: Depends on the database version you're logging into:
7.x: SELECT_CATALOG_ROLE must be created (run sc_role.sql as sys), then granted
8.x: grant select_catalog_role to userid;
9i, 10g: grant select any dictionary to userid;
--- Q: What is "grid control" in 10g?
A: Oracle's updated monitoring and tuning tool, now web-based in 10g and very feature rich.
--- Q: What is a good strategy for setting up automated monitoring in 10g Grid Control?
A: Adapted from Chris Foot's dbazine.com blog dated 5/14/06:
- confirm email is working properly on your machine
- ... ?? continue paraphrasing/adapting from personal work
--- Q: What are some best practices for what to monitor on an Oracle database?
A: Database Level:
- Database instance up/down heartbeat monitoring (to include SIDs, ASM and clusterware)
- ASM disk group space (replaces current "Available TableSpace Monitor" in SiteScope)
- Data Guard: confirming that the apply is not hung, confirming managed standby is operational
- RMAN: successful completion, removal of expired and obsolete backups (current SiteScope "Check of hot back-up success")
- RMAN: restore database validate commands, report need backup, crosscheck backup commands
- Flashback space management; alerts when flashback recovery area hits 85% full
- RAC/Clusterware: heartbeat monitoring
- General database monitoring: cpu, memory, I/O alerts if thresholds tripped
- Audit monitoring; notification for specific auditing events of note
- Forced Redo log switching (SiteScope "Check on Hourly Redo Log Switch")
- Oracle Listener monitoring (SiteScope "Oracle Listener Check")
- Open Processes monitoring (SiteScope "Process Counts")
- Session count monitoring (SiteScope "Query of Number of Open Database Sessions" and "Session Counts")
- Long running SQL monitoring
- Wait event monitoring
- Deadlocking and process blocking monitoring
Server Level:
- Alert log: major oracle errors: ora-600, 7445, 1555 (SiteScope "ODB_ALL_ORACLE_ERRORS")
- Alert log: other significant oracle errors (SiteScope "ODB_SPECIFIC_ORACLE_ERRORS")
- Alert log: database/instance stop and start
- File system space monitoring (SiteScope "Default_DiskSpace_Utilization_(/root,/var,/usr,/tmp")
- CPU monitoring (SiteScope "CPU Monitor")
- RAM/Memory usage (SiteScope "Memory Monitor")
- Shared Memory monitoring
- Swap monitoring
- Network I/O monitoring at O/S level
- uptime/top monitoring for runaway processes
Exadata specific (could all be assumed to be done by Platinum remote monitoring):
- ILOM monitoring
- PDU monitoring
- Storage Cell Server monitoring
- Admin and Leaf switch monitoring
Other current SiteScope Linux O/S checks done in SiteScope:
- SiteScope "Cron Check"
- SiteScope "Heartbeat Monitor"
- SiteScope "NTPD Service Monitor"
- SiteScope "Open File Check"
- SiteScope "Oracle Grid Control agent Check"
- SiteScope "Read-Only FileSystem Check"
- SiteScope "Syslogd Verification"
- SiteScope "Used File Descriptors Count"
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
SQL*Plus/sqlplus/SQL Plus
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Q: How do you log into Oracle from the command line?
A: sqlplus user/password@sid (or sqlplus user@sid and enter the password at the prompt).
If TNS cannot resolve the sid, you'll have to do the login manually/interactively. Either way, the password is seen in cleartext in the process table (ps -fe | grep sqlplus).
--- Q: How do you secure the database password when logging in via sqlplus command line? How do you hide the password? When programming?
A: (from a discussion on Oracle-L 9/23/03)
- from Tom Kyte: create an "identified externally" user.
First set the init.ora value os_authent_prefix to "ops$":
sql> create user ops$tkyte identified externally;
This lets you log in by just typing:
$ sqlplus /
- or use: $ sqlplus user@sid, then type in the password at the prompt
- In scripts you can do this:
#!/bin/sh
sqlplus -S << EOF
username/pwd@instance
set echo off
... insert sql here
EOF
--- Q: What are some examples of features you can do w/ this type of sqlplus program?
A: From an Oracle-L posting by "Bellow, Bambi" 11/14/03:
sqlplus << EOF
user/pwd@sid
def somevar=`grep -i yourvar $DIR/$FILENAME|awk '{print $NForsomething}'`
def somevar2=$SOMETHINGFROMEARLIERINTHISPGM
select &somevar*&somevar2 from dual;
exit
EOF
--- Q: I want to get some data out of Oracle in a text, delimited format.
A: In a vi window:
- In your sql, you'll have to encapsulate each column with || 'sep' || where sep is your separator (a tilde ~ or comma perhaps). The easiest way is to list all your columns (get the output of a desc table) and run:
:1,$ s/^ //
:1,$ s/ *..*$//
This leaves one column name per line. Now hit a couple of sed commands in vi:
:1,$ s/^/|| /g
:1,$ s/$/ || '\~'/g
Clean up the first and last lines of your list, then insert the select and from clauses to be what you wish (make sure there's no blank line between the select and the first column name!). You should have sql that resembles:
select COL_A || '~' || COL_B || '~' ... || COL_N from table;
- Add these lines to the top:
set linesize 1000
set pagesize 0
set feedback off
set trimspool on
set termout off
- Add these lines to the bottom:
-- Restore default settings
set linesize 80
set pagesize 24
set feedback on
set termout on
- Cut-n-paste your target sql to the sql*plus window
- file -> spool -> spool file and specify an output file, OR in sql*plus: SQL> spool c:\dir\filename
- execute your query
- file -> spool -> spool off, OR SQL> spool off
and then get your output file. More frequently, Oracle admins will write small extraction scripts that you can call from sql*plus.
They automate this entire sequence rather nicely.
--- Q: How do I prevent output to the screen when spooling?
A: set termout off. Put this line in your sql script, call it from the sqlplus command line (@yourscript.sql) and output is suppressed.
--- Q: How do I get clean spool'ed SQL files with no headers, footers, etc?
A:
set pagesize 0
set termout off
set feedback off
set timing off
set echo off
Note: termout off and echo off will effectively turn off outputs when working interactively.
--- Q: How do I prevent output to the screen interactively? How can I suppress output of a SQL statement?
A: No actual sqlplus setting, but you can set autotrace traceonly; it will execute the query, suppress the results and then print the optimizer plan. set termout off does not work interactively.
--- Q: How do I turn off "1 row selected" at the end of sql commands?
A: set feedback off
--- Q: How do you recall the previous command?
A: edit, which brings up an editor. Note: this defaults to "ed" on unix, notepad on PCs.
--- Q: How do you suppress "Connected To:" and "Disconnected From.." messages in scripts?
A: Not really at a macro level; the best way is to control logging via spool commands inside SQL instead of at the OS level.
--- Q: How do you specify what editor "edit" brings up?
A: The EDITOR environment variable.
$ export EDITOR=vi (in sh/bash/ksh)
$ setenv EDITOR vi (in csh/tcsh)
--- Q: How do you clear the buffer?
A: del. del # for a line number, del x y to delete from x to y.
--- Q: How do you insert an "ampersand" (&) into a string in oracle?
Q: How do you query data which has ampersands in the where clause?
A: Several methods; the easiest is: in sqlplus, set define off before attempting.
--- Q: What are some good "SET var" commands to work efficiently in sql*plus?
A:
- set pause on/off == piping output through more in sqsh.
- set autocommit on == Autocommit.
- set null '*NULL'* == have null values appear as NULL instead of blanks.
- set null null (another way)
- set heading off == turns off headings
- set linesize xxx == linesize (defaults to 80).
- set pagesize 0 == page size (defaults to 24)
- set feedback off == turns off "# rows returned" message
- set trimspool on == strips trailing spaces when spooling to a file
- set termout off == turns off echo to screen (when reading from file)
--- Q: How do you pre-set configuration variables for your Oracle session?
A: login.sql/glogin.sql; $ORACLE_HOME/dbs/login.sql or $ORACLE_HOME/sqlplus/admin/glogin.sql
Note: in 9i, the existence of a glogin.sql prevents booting! The solution is a hack; move the glogin.sql file to glogin_old.sql in your startup/dbstart script.
Note: in order to have login.sql be "visible" to all processes when it starts, $SQLPATH must be set properly.
--- Q: How do you "echo" text to the screen?
A: dbms_output.put_line. Usage:
dbms_output.put_line ('hello world ');
dbms_output.put_line ('my var is ' || varname);
--- Q: How do I print a blank line using dbms_output?
A: Attempting to print ' ' will be ignored. Instead use:
dbms_output.put_line(chr(10));
dbms_output.put_line(chr(13));
or dbms_output.new_line
--- Q: What set command do you need to set to use dbms_output?
A: set serveroutput on;
--- Q: I'm trying to use dbms_output.put_line() and get this error: ORU-10028: line length overflow, limit of 255 chars per line. How do I work around this?
A: Known limitation in dbms_output; the best way is to create a wrapping "print_longlines" procedure that loops through long variables and prints only 250 chars at a time.
This code pulled from asktom:
create or replace procedure p ( p_str in varchar2 )
is
   l_str long := p_str;
begin
   loop
      exit when l_str is null;
      dbms_output.put_line( substr( l_str, 1, 250 ) );
      l_str := substr( l_str, 251 );
   end loop;
end;
/
--- Q: What if I can't use the above proc and I get an error message like this:
ORA-20000: ORU-10027: buffer overflow, limit of 2000 bytes
ORA-06512: at "SYS.DBMS_OUTPUT", line 106
ORA-06512: at "SYS.DBMS_OUTPUT", line 65
A: The default buffer size in oracle is 2k, which can be easily overblown if you're printing the output of a looping process.
set serveroutput on size 1000000
or embed this in your code:
DBMS_OUTPUT.DISABLE;
DBMS_OUTPUT.ENABLE(1000000);
The maximum buffer size is 1M (or 1000000 in the examples above).
10g: you can set the size to be unlimited: set serveroutput on size unlimited
--- Q: How do you fix the backspace character when using the bash shell in sql*plus on Unix machines?
A: This seems to be a bash shell problem; it works in sh, tcsh.
$ export TERM=vt100
$ export ORACLE_TERM=vt100
$ stty erase (control-H)
You can do this from sqlplus and it works:
sql> !stty erase ^H
None of these fixes the backspace character returning "^H" in sqlplus. However, after dropping to sh from bash, running stty erase and then exiting, it works fine? Very odd. Repeatable?
--- Q: I just got Cannot create save file "afiedt.buf" in sqlplus when editing. How do I fix this?
A: There is a file called afiedt.buf in the current directory that you do not own or cannot overwrite. Remove it.
--- Q: How do you execute shell commands in sqlplus?
A: host or !
SQL> host ls -l
SQL> ! ls
--- Q: Which is faster, count(1), count(*) or count(columnname)? Or, how about count(rowid)?
A: A common Oracle myth apparently, that one is faster than the other. A myth now because of previous behavior:
- Pre v8.0, count(*) was known to be faster than count(1).
Mythical claims on discussion boards/email groups over the years:
- Apparently, the OCP DBA exam says that count(column) is fastest, followed by count(1). The count(*) requires a bit of extra computing time to resolve the "*" into "all columns." Is this true?
- Oracle has done kernel tuning specifically for count(*)?
- count(rowid) proved to be slower by about 2.8% in large scale tests?
- count(columnname) will never be as fast.
However:
- asktom's site says count(1) and count(*) are the same and has stats to prove it. Tests show the exact same plans, same cost.
- 9/7/07: breakdown of counting and explain plan costs:
create table test as select * from dba_objects;
count(*) of test: 0.13 seconds, FTS of test, 301 cost
count(1) of test: 0.04 seconds, FTS of test, 301 cost
count(rowid) of test: 0.03 seconds, FTS of test, 301 cost
count(object_name) of test: 0.03 seconds, FTS of test, 301 cost
2nd round:
count(*) of test: 0.02 seconds
count(1) of test: 0.03 seconds
count(rowid) of test: 0.03 seconds
count(object_name) of test: 0.03 seconds
So basically: once the table is cached, the times are identical.
--- Q: I'm getting an ORA-06401: NETCMN... error when logging into a server. What's wrong?
A: More than likely, an incorrectly formatted tnsnames.ora file. Check for ^M characters, mistakes in the file, etc.
--- Q: What's the easiest way to make a comma-delimited file (.csv) from my table's worth of data?
A: Several methods:
1. Pure sql*plus method:
SQL> set colsep ','
SQL> spool filename.csv
SQL> select * from table;
SQL> spool off
2. Write a quick perl routine ... see run.pl routine for examples.
3. Use the utl_file package ... need example??
--- Q: I don't have set autocommit on in my sqlplus window. What happens when I exit the session?
A: sqlplus won't commit any DML operations while you're still logged in until you explicitly hit "commit;" but will automatically commit all actions when you exit the sqlplus window.
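Revisiting the .csv question above: method 1 combined with the "clean spool" settings from earlier in this section gives a minimal sketch like the following (the table name and spool file are hypothetical):

```sql
-- Suppress headers/feedback so only the data rows land in the file:
set colsep ','
set pagesize 0
set feedback off
set trimspool on
spool /tmp/mytable.csv
select * from mytable;   -- mytable is a placeholder table name
spool off
```

Note that set colsep only inserts the delimiter between columns; padded column widths remain, which is why trimspool on (and often explicit column formatting) is worth adding.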
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
PL/SQL Coding Specific
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Q: What does the PL in PL/SQL stand for?
A: Procedural Language
Q: How do you run scripts from the command line?
A: sqlplus user/pwd@sid @script.sql
--- Q: How can you limit the number of rows being returned? Ala set rowcount x? How do I get the first N rows from a table?
A: (There is no built-in "set rowcount N" like Sybase.) Update:
SELECT * FROM (SELECT empno FROM emp ORDER BY empno) WHERE ROWNUM < 11;
A better? option is to set a pagesize and set pause on in sql*plus, select your data and page through. When you've seen enough, hit ctrl-c and you can stop retrieving results.
--- Q: How do I get a list of objects? Tables, procs, triggers?
A: (Thanks Danielle Lemos dlemos@goldencross.com.br for some of these)
select table_name from user_tables;
select view_name from user_views;
select object_name from user_objects where object_type='PROCEDURE'; (or PACKAGE or FUNCTION for these types of db objects)
select sequence_name from user_sequences;
select index_name from user_indexes;
user_ views only show those objects owned by the user running the query.
all_ views will show all objects by all users that you have select privs for.
dba_ views show everything.
dba_mviews and dba_snapshots show different takes of materialized views.
--- Q: How do you get a list of columns per table (ala sp_help or select * from table where 1=2 in Sybase/MS-Sql server)?
A: select column_name from user_tab_columns where table_name='TABLE';
--- Q: Is there an auto-increment/auto increment/identity concept in Oracle?
A: Prior to 12c, no; you'd have to use sequences to keep track of PKs. In 12c, yes. Great write-up here:
https://oracle-base.com/articles/12c/identity-columns-in-oracle-12cr1
--- Q: How do you work with Sequences?
A: Create them as such:
create sequence seq_name start with X increment by 1;
insert into table values (seq_name.nextval, ...);
To get the current value of a sequence:
select seq_name.currval from dual;
However, a sequence must be "current" in the session in order to be able to select from it. That means you must have "initialized" the sequence in the session in question; which means you must have used the seq_name.nextval value.
--- Q: What is the default cache for sequences?
A: 20. If you do not specify a cache/nocache value then Oracle caches 20 numbers that are burned if a database instance ends unexpectedly. You can lower this number to as few as two but it will impact performance in some way.
http://download.oracle.com/docs/cd/E11882_01/server.112/e17118/statements_6015.htm#SQLRF01314
--- Q: How do you assign a sequence number to a variable in oracle?
A: select seq_name.nextval into varname from dual; (where of course varname is previously declared)
--- Q: How do you populate a sequence with a particular number?
A: You can't: you must drop it and recreate it:
drop sequence seq_name;
create sequence seq_name start with N;
--- Q: How can I get a list of all my sequences and their current values?
A: select sequence_name, last_number from user_sequences;
--- Q: How do you get a sequence to be "current" in your session, so that you can see what its current value is without burning a new value? (i.e., why can't I type 'select sequence.currval from dual' in my sqlplus window?)
A: You cannot. Per the Oracle manual: "CURRVAL can only be used if seq_name.NEXTVAL has been referenced in the current user session (in the current or a previous transaction)."
--- Q: Can Sequences ever get duplicate values?
A: No, even in parallel/RAC environments. Gaps can occur from rollbacks or from shutdowns (just as with Identities in Sybase) but the gaps are limited. Gaps also occur when the cached sequence values age out of cache.
You can eliminate these gaps by marking the sequences "nocache" or by "pinning" them in the cache, preventing them from aging out.
--- Q: How do I debug SQL that I've written that isn't compiling correctly?
A: After running, do
SQL> show errors
and you'll get line numbers (approximately) of problem lines. The line count given will not include any part of the "create procedure/trigger" statement if you're calling the SQL from such. To debug line by line: dbms_output.put (x||' '||y);
Note: Toad has a pretty nice little sql procedure compiler that will stop at the specific line causing errors, instead of depending on "show errors" (which actually interprets information from the system view user_errors).
--- Q: You apparently cannot do a join in an update statement clause. How do you then recode an update statement? How do you do a join in update? Join during Update?
A: An in clause. Examples:
Sybase/MS Sql server Example 1:
Update table1
From table1 t1, table2 t2
Set t1.col = t2.col
Where t1.id = t2.id;
becomes
Update table1 t1
Set col = (Select col from table2 where id = t1.id);
Example 2:
update tableA set col=x
from tableA, tableB
where tableA.id = TableB.id and TableB.id = "123"
becomes
update tableA set col=x
where id in (select tableA.id from tableA, tableB
             where tableA.id = TableB.id and TableB.id = "123")
Example 3:
update tableA
set a.col1 = b.col1, a.col2 = b.col2
from tableA a, tableB b
where tableA.id = tableB.id and TableB.id = '123'
becomes
update tableA
set (col1, col2) = (select col1, col2 from TableB where id = '123')
where id = '123'
or
update dim_fund_grp fg
set (bgng_bdgt_fy_nbr, endg_bdgt_fy_nbr) =
    (select bgng_bdgt_fy_nbr, endg_bdgt_fy_nbr from dim_fund
     where fund_key = fg.fund_key);
However, you cannot reference :new or :old in this subquery.
You'll have to assign the targeted column to a variable instead (if you're using triggers).
An example of how to update several fields within a join at once, from asktom:
update (select t10.col1 a, t10.col2 b, t10.col3 c,
               t2.col1 d, t2.col2 e, t2.col3 f
          from t10, t2
         where t10.col1 = t2.col1)
   set b=e, c=f;
If you don't have PKs on both, you'll get an error though:
ORA-01779: cannot modify a column which maps to a non key-preserved table
--- Q: How do you concatenate two fields or two strings in Oracle?
A: Using the || operator. e.g. select 'aaa' || 'bbb' from dual; returns "aaabbb"
--- Q: How do you assign a variable in a procedure/trigger?
A: The := sign. However, you cannot do:
new_var := select var from table where col1 = x;
instead:
select var into new_var from table where col1 = x;
--- Q: How do you work with functions?
A: Create them, then call them. User-defined functions are very nice. Simple example (note sR3 must be declared; here it defaults to the input parameter):
CREATE OR REPLACE FUNCTION fn_checknull (sPARM1 integer)
RETURN NUMBER
IS
   sR2 integer := sPARM1;
   sR3 integer := sPARM1;
BEGIN
   /* No data reported */
   if sR2 is null then
      sR3 := 0;
   elsif sR2 = -1 then
      sR3 := 0;
   end if;
   RETURN sR3;
END fn_checknull;
/
--- Q: Do Oracle temp tables work the same way as #tables in Sybase/MS Sql server?
A: More or less, yes. You can create a temp table (see below for usage) and rows inserted there stay for the duration of the transaction. The only difference seems to be that when you exit Sybase/MS the #table goes away, whereas in Oracle it stays persistent but loses its data.
--- Q: How do you create and use temp tables?
A: create global temporary table test (col1 integer, col2 varchar2(5));
Then insert into the table as normal. However, nobody (not even system) can see the data in the table except the user who created and owns the table.
--- Q: How do you alter a database to use a new/different temporary tablespace? Or, how do you move to a new temporary tablespace?
A: Best way:
- Create the new temporary tablespace in the new location, like this:
sql> CREATE TEMPORARY TABLESPACE temp2 TEMPFILE '/test01/oradata/EHRISTG/temp02.dbf' SIZE 2048m;
- Alter every user to use the new temp ts; use something like this:
SELECT 'alter user '|| username || ' temporary tablespace TEMP2;' FROM DBA_USERS;
- Make the new temp ts the default:
sql> alter database default temporary tablespace temp2;
- Drop the old temp tablespace:
sql> drop tablespace temp including contents and datafiles;
--- Q: How do you tell what the current default temporary tablespace is?
A: select * from database_properties;
--- Q: How do you work with long variables doing insert into select from?
A: You can't manipulate long variables this way. You have to use a third party app and bind the long variable to a local var to do the insertion.
--- Q: Is the between function inclusive of the two parameters used?
A: Yes it is. select * from table where x between 1 and 2 WILL get all records that are 1 or 2.
--- Q: How do you do case insensitive searching?
A:
- select * from table where upper(column) = 'PATTERN'
but this will table scan because you can't have an index on upper(column)
- Add redundant columns for search fields, upper them on insert, and index them.
- Oracle 8i (8.1) allows it: use function-based indexes. You must set a couple of system parameters, then normal indexes will function efficiently even in the midst of a function like upper, lower, etc. This eliminates the table scan issue in previous versions.
--- Q: What is the syntax for inner and outer joins in Oracle (akin to *= in MS-sql and Sybase)?
A: The "(+) =" operator. However, the tables are put in "backwards" order. For example: you want to outer join tableB to tableA on column123.
Your syntax would be:
select tableA.whatever, tableB.whatever
from tableA, tableB
where tableB.column123 (+) = tableA.column123
In Sybase/MS Sql server this would read:
select tableA.whatever, tableB.whatever
from tableA, tableB
where tableA.column123 *= tableB.column123
However, "where outer (+)= inner" should be equivalent to "where inner =(+) outer".
NOTE: see next question!
--- Q: What is the difference between an inner join and an outer join?
A: An "inner join" is basically a regular or "full" join, returning exactly the matching records on the join condition between tables. An "outer join" allows records to be returned from joined tables where the join condition fails, to show that records exist in one part of the join condition but not the other.
--- Q: What is a left join or a right join really? Aren't they just outer joins?
A: Essentially, yes. A left join basically says: show me all the records that satisfy the regular join PLUS any records in the "left" table that don't satisfy the join condition. Basically you're outer joining the "left" table in the syntax. Here's a nice overview; ms-sql server based but the logic is right:
http://www.wellho.net/mouth/158_MySQL-LEFT-JOIN-and-RIGHT-JOIN-INNER-JOIN-and-OUTER-JOIN.html
--- Q: What is a quick introduction to outer joins?
A: Run this sql:
drop table t1;
drop table t2;
create table t1 (col1 int, col2 varchar(5));
create table t2 (col1 int, col2 varchar(5));
insert into t1 values (1,'abc');
insert into t1 values (2,'def');
insert into t1 values (3,'ghi');
insert into t1 values (4,'jkl');
insert into t2 values (1,'mno');
insert into t2 values (2,'pqr');
insert into t2 values (5,'stu');
select * from t1;
select * from t2;
-- to delete from table1 where records are not in table 2
delete from t1 where t1.col1 in
(select t1.col1 from t1 left outer join t2 on t1.col1 = t2.col1
 where t2.col1 is null);
-- to delete from table2 where records are not in table 1
delete from t2 where t2.col1 in
(select t2.col1 from t2 left outer join t1 on t1.col1 = t2.col1
 where t1.col1 is null);
--- Q: How do I split a string?
A: A combo of the instr() and substr() functions.
These commands will split an email address on the "@" sign:
email_post := substr(lower(:new.email),instr(:new.email,'@',1)+1);
email_pre := substr(lower(:new.email),0,instr(:new.email,'@',1)-1);
These commands will split a time value (i.e. 9:45) on the ":":
select substr('9:45',0,instr('9:45',':',1)-1) from dual;
select substr('9:45',instr('9:45',':',1)+1) from dual;
These commands will split up an address: ??
These commands will split up (reasonably well) a single string name field: ??
--- Q: How do you get a string with the first character uppercase and the rest lower?
A: select upper(substr('abcde',1,1)) || lower(substr('abcde',2,255)) from dual;
--- Q: What is the syntax of the instr() and substr() functions?
A: instr simple syntax: instr('string', substring to search for, starting position #, occurrence #)
So, in this example:
select instr('abc,def,ghi,jkl',',',1,3) from dual;
will return 12, meaning that the 3rd occurrence of the search string "," starting at the first character of the string happens at the 12th position of the string.
Substr simple syntax: substr('string',starting position #, length of string to grab) Example: select substr('abcdefghijklmnop',5,5) from dual; returns 'efghi' meaning that the 5 characters starting at the 5th position were efghi. --- Q: How do you read backwards from the end of a string when splitting it? A: use negative starting number in substr. Ex: to get the LAST four characters of a string, do this: select substr('abcdefg',-4,4) from dual; this will return 'defg' --- Q: How do you convert integers to strings (aka str() function in ms-sql)? A: to_char() --- Q: What does ltrim, rtrim do? A: trims spaces from the left and right of strings. --- Q: What does rpad do? A: pads the right side of a field with a specified character. --- Q: Can you do cross-tablespace/instance joins? A: Definitely. This example is a cross-schema join (joins across instances require a database link): select m.p_id, c.lastname from ejp.membership m, mei.constit c where m.member_id = c.constit_id and c.constit_id = '0000026'; --- Q: How do you find all the empty values for a column? A: select * from person where (last_nm is NULL or last_nm = ' ') (one space between single quotes). Oracle stores blank values in some character fields as single spaces. --- Q: How do you append a string of text to a column (a varchar2 column?) A: || ex: update boss_test3 set col2 = col2 || 'fgh' where col1=1; --- Q: How can I get rid of duplicate rows in my table? What is the FASTEST way to get rid of duplicate rows? How do I find duplicate records? A: From Oracle-L discussion 9/19/03, 8/5/04, multiple posts, multiple discussions 1. delete from table where rowid not in (select max(rowid) from table group by PK); sql only solution, not really feasible in huge environments. In fact, attempts to run this on very large tables resulted in 2 days of solid processing. 1a. slight variation DELETE table t1 WHERE EXISTS (SELECT 1 FROM table t2 WHERE t2.id = t1.id AND t2.rowid > t1.rowid) 1b.
Yet another solution delete from emp where rowid in (select rowid from emp minus select min(rowid) from emp group by id); 2. Alter table mytab enable constraint PK exceptions into exceptions; Better way; much faster for large tables, lets you audit the duplicate rows by examining exceptions table. (you must run $ORACLE_HOME/rdbms/admin/utlexcpt.sql before doing this). Con: the exceptions table will contain BOTH duplicate rows in the source table ... you'll have to delete them manually. 3. Write a cursor; sql coding solution ... probably doesn't give you anything more than what option 2 provides. 4. select distinct into newtable, drop old table, rename new table (or delete from old table, select into from new). Create table as select (CTAS) has to be the fastest way. 5. If the duplicates are rare, do a group by clause where count(*) > 1, delete by rowid. Perhaps the purest fastest way is to use unix sort/unique commands: a. sqlload data out or select out delimited b. sort filename | uniq > new file c. sqlload back in. 6. Tom Kyte published this method using analytical functions; this worked the best with a massively large table. delete from table t where rowid in (select rid from (select rowid rid, row_number() over (partition by id order by rowid ) rn from t ) where rn <> 1 ) / --- Q: Which is the best method of "removing" rows from a result set? "minus," a "not in" clause, or "where not exists?" A: It depends (discussion on Oracle-L 9/21/03). Note that not in is not equivalent to a "not exists" query if null records are involved. - "not in" with a hash_aj hint if the subquery is significantly less "costly" (in terms of physical i/o) then the outer query. - "where not exists" if the subquery is relatively close in cost to the outer query. If indexes are involved, and if nested_loops are used to join rather than sort/merge joins, this method may be best. - "minus" is probably the easiest to code, but could result in larger sort_area_sizes. 
11/09 update: a "not in" query ran for 17 hours, but came back in about 10 seconds using minus when selecting across a db link. select key_id from edw_iq.fact_iq_workflows where key_id not in (select key_id from iq.iq_workflows@dcsappp); versus select key_id from edw_iq.fact_iq_workflows minus select key_id from iq.iq_workflows@dcsappp; --- Q: What are the limitations of a "not in" query? A: If ANY row returned by the "not in" subquery is NULL, every comparison against it evaluates to unknown, so the outer query returns 0 rows. --- Q: What is the difference between varchar() and varchar2()? Why is there a varchar2() in Oracle? A: - varchar() is solely for backwards compatibility to its inception in Oracle 6. Its inclusion is not guaranteed to be supported in the future by Oracle. Apparently it corresponds to an ansi-sql varchar. - varchar2() was created in Oracle 7 to actually hold variable length data. Initially, oracle's varchar2() WAS called "char" but then the Ansi standards document came out, and oracle renamed char() to varchar2() and created a "new" char() data type of fixed length. varchar2() is the equivalent to Sybase/MS-sql server varchar() datatypes. In 9i, table creates attempted on varchar() fields are automatically converted to varchar2() columns. Oracle treats char() and varchar2() variables the exact same while processing, debunking an Oracle Myth that char() datatypes are more efficient than varchar2() --- Q: Is there an Oracle equivalent to isnumeric() or isdate() in Sybase? A: No, but you can write your own functions pretty easily. ?? need examples --- Q: Is there an equivalent to sybase's "select into" method of creating a table? A: Yes: create table as select (CTAS for shorthand)... --- Q: What is a deterministic function versus a non-deterministic function? A: A deterministic function always returns the same result for the same input arguments (it depends on nothing outside its parameters: no session state, package variables, or table data). Declaring a function DETERMINISTIC lets Oracle use it in function-based indexes and cache its results; a non-deterministic function (like one calling sysdate) can't be used that way. --- Q: Is there a storage difference between a numeric(7,2) and numeric(22,7)? A: No, the storage impact depends on the amount of data actually inserted into the fields.
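For the isnumeric()/isdate() question above, here is a minimal sketch of a home-grown check. The function name is my own, not an Oracle built-in; it simply relies on to_number() raising an exception for non-numeric input (an is_date() twin would wrap to_date() the same way):

```sql
-- Hypothetical is_numeric(): returns 1 if the string converts cleanly
-- to a number, 0 otherwise.
create or replace function is_numeric (p_str in varchar2)
return number
is
    l_num number;
begin
    l_num := to_number(p_str);   -- raises an exception on bad input
    return 1;
exception
    when others then             -- VALUE_ERROR / INVALID_NUMBER
        return 0;
end;
/
```

Usage: select is_numeric('123.45'), is_numeric('12a') from dual; should return 1 and 0.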
--- Q: Is there an Oracle equivalent to sybase's sp_sybsyntax? Can you get the syntax of commands online, without having to go to the manuals? A: No; the syntax is typically far too complex to print out. Even the Oracle manuals have to break syntax into several sections. You can get Oracle error messages from command line like this: % oerr ora 600 --- Q: How do you insert/update/work with BLOB or CLOB data? A: dbms_lob package. --- Q: What is a "hash function?" A: A function that maps an input value to a fixed-size integer "bucket"; the same input always hashes to the same value. Oracle uses hashing internally for hash joins, hash partitioning and hash clusters, and exposes the ora_hash() function (10g+) for ad-hoc use. --- Q: Are results guaranteed to always be in the same order, when querying data from Oracle, without explicitly using an Order by? A: the official party line is No. Rows can migrate, and parallel reads on the table can mix up the results. You shouldn't trust the inserted order of the data. --- Q: How do I pull row number "N" from a result set? How do I get the nth row from a query? A: Two methods, one pl/sql specific, the other not (from oracle-L discussion 10/21/03) - SELECT cola, colb, colc ... FROM (SELECT cola, colb, colc ..., ROWNUM r FROM tab WHERE ...) WHERE r = 2; Notes: oracle specific, and as noted elsewhere, you're not always guaranteed to get the same row back each time (unless you order the results first). - SELECT a.cola, a.colb, a.colc ... FROM ( SELECT ROW_NUMBER() OVER (ORDER BY cola) AS row_number, cola, colb, colc ... FROM tab [where ...] ) a WHERE a.row_number = 2; Notes: ROW_NUMBER() is ANSI SQL, so this should work across vendors. --- Q: What does the NVL() function do? A: Returns a value in place of null, if null is returned. select nvl(to_char(cola),'not applicable') from table; --- Q: What is the syntax of an if-then-else statement? A: DECLARE grade CHAR(1); BEGIN grade := 'B'; IF grade = 'A' THEN DBMS_OUTPUT.PUT_LINE('Excellent'); ELSIF grade = 'B' THEN DBMS_OUTPUT.PUT_LINE('Very Good'); ELSIF grade = 'C' THEN DBMS_OUTPUT.PUT_LINE('Good'); ELSIF grade = 'D' THEN DBMS_OUTPUT.
PUT_LINE('Fair'); ELSIF grade = 'F' THEN DBMS_OUTPUT.PUT_LINE('Poor'); ELSE DBMS_OUTPUT.PUT_LINE('No such grade'); END IF; END; / --- Q: What is the syntax of the decode() function A: (from oracle manual) decode (column, value1, 'Result1', value2, 'Result2', ... , 'Default'); It's basically a nifty inline if-then-else clause. example: select decode (dt_key,99999,21002131,dt_key) from dt@r2lookup where dt_key in (1275,99999) returns 1275 for the first, and 21002131 for the second. --- Q: What is the cast function? A: basically a converter function from one datatype to another. Similar to to_char() but works generically. sql> select cast('1997-10-22' as date) from dual; --- Q: What is the maximum number of table joins allowed in Oracle, by version? A: ?? I don't believe there IS a maximum number of table joins... --- Q: Which will be faster, a query with "is null" or "is not null?" A: If there's an index on the field, then "is not null" will be much faster. Nulls are not stored in single-column B-tree indexes, so an "is null" predicate can't use the index and a table scan is required to answer "is null" queries. Without an index, both will table scan and be the same performance time. --- Q: How can I create a skeletal version of another table? A: create table X_new as select * from X where 1=2; --- Q: What is faster, deleting and inserting, update in place or merge? A: from discussions on oracle-L 2/06 Note: does not take into account CTAS operations Generally, update in place will be faster than delete/insert. Reasoning: - undo work for an update will be less than for deletes and inserts. Undo holds just the changed information for a rowid for updates, while it must put the entire row in for a delete. - deletes and inserts generate more redo than updates (assertion is that deletes/reinserts could be 4 times as expensive as an update) - Additional network traffic if the delete/insert are executed from an app.
- Delete/insert modifies every index on a table, whereas updates only modify the indexes affected by the columns being updated. --- Q: What are pl/sql operations that will force Full Table Scans unknowingly? A: ?? many more - any "is null" operations, since nulls aren't stored in indexes - forcing a to_char() or to_number() conversion on the fly (say if you select * from table where year=2005 but the year field is defined varchar2(4)). --- Q: How can I select a set of random rows from a table? A: from 7/1/04 lazydba post by "Seefelt Beth" SELECT crnt_rfrnc_key, cd FROM (SELECT dbms_random.VALUE, crnt_rfrnc_key, cd FROM CRNT_RFRNC_DATA ORDER BY 1) WHERE ROWNUM < 11; --- Q: What are the analytical functions (analytics) available to the programmer? A: full list from 10g manual: ?? actual uses/definitions AVG * CORR * COVAR_POP * COVAR_SAMP * COUNT * CUME_DIST DENSE_RANK FIRST FIRST_VALUE * LAG LAST LAST_VALUE * LEAD MAX * MIN * NTILE PERCENT_RANK PERCENTILE_CONT PERCENTILE_DISC RANK RATIO_TO_REPORT REGR_ (Linear Regression) Functions * ROW_NUMBER STDDEV * STDDEV_POP * STDDEV_SAMP * SUM * VAR_POP * VAR_SAMP * VARIANCE * --- Q: What's a quick query to count the number of occurrences of a character in a string? A: from Oracle-L conversations 8/5/04, calculate the length of the string and then subtract the length after removing all the specific characters. This will return a count of the number of commas in the given string. SELECT LENGTH('ab,cde,df,efg,geg,d') - LENGTH(REPLACE('ab,cde,df,efg,geg,d',',',NULL)) as comma_count FROM DUAL; --- Q: How do I get the length of a varchar field? How do I get the max length of a field in a table? A: SQL> select length(col1) from test2 where pk=123; SQL> select max(length(col1)) from test2; --- Q: How do you round numbers to the nearest "large" value (say, the nearest 1000)?
A: Several options - use ceil() (rounds up): select 1000*ceil(12345678/1000) from dual; - use round() (rounds to nearest): select round(12345678/1000)*1000 from dual; - round() using positional parameter: select round(12345678, -3) from dual - use floor() and mod() together (rounds down): select floor(12345678/1000)*1000 from dual; or select 12345678 - mod(12345678,1000) from dual; --- Q: What is the translate() function and what does it do for you? A: It basically provides a quick character filter that can be used on strings examples (from Oracle docs) /* this stars out the password ... but wouldn't star out numbers */ select translate('MyPassword', 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', '****************************************************') as newtxt from dual; /* this would star out a ssn but retain the dashes... */ select translate('123-45-6789', '0123456789', 'xxxxxxxxxx') as newtxt from dual /* this lower-cases text */ select translate('ALLCAPS TEXT','ABCDEFGHIJKLMNOPQRSTUVWXYZ','abcdefghijklmnopqrstuvwxyz') from dual; ---- Q: How do you declare variables at the sql*plus level, and get them to stay persistent? A: sql> column parm_value NEW_VALUE post_code sql> select parm_value from table where blah; Use &post_code in the remainder of the script and it works --- Q: What is the best way to update millions of records in a table? A: don't do it :-). Seriously, updates are going to be very time consuming. (some notes from 9/1/05 Oracle-L thread) Alternatives - create table newtable as select * from old_table - insert /*+ append */ into new table nologging. Or use insert /*+ append parallel (tablename,12) */ - use commit_every() function If you absolutely must do the operation, do everything you can do to eliminate logging: 1) set all indexes to unusable (except indexes that you'll use during the update/delete operations) 2) Grow your undo tablespace by approximately 300MB 3) Bring database to NOARCHIVELOG mode if you can. 4) Grow buffer pool to 300mb to accommodate all those blocks from both indexes and tables - if partitioned, update by partition. use a small loop.
- increase "parallel query slaves" to maximum, issue straight sql update - use pl/sql with infrequent commits (twice as slow as straight sql?) - use nologging for faster performance (immediately backup the datafile when done if you do this). --- Q: What is the best way to delete millions of records in a table? A: (some notes from Oracle-L thread 9/1/05) - instead of deleting, CTAS a new table and commit_every() like this: begin commit_every(100); create table t as select * from source where 4_num < 20040101 AND 4_num > 20041231; end; If you're keeping any sizeable fraction (up to 1/2 of total records) this is the way togo. If you have the space to make a full copy of the data, there's no question CTAS is the way to go. - See above answer for altering objects to reduce logging. - Next time, use partitioning :-) - Use parallel DML if you *have* to do the deletes - Interesting trick: create a MV with the records that will be left behind post delete, then create public synonym, perform underlying maintenance, drop synonym to MV afterward. Guarantees most uptime. (Trick from guru Tom Kyte). --- Q: How do you do modulo operations in Oracle pl/sql? A: mod() SQL> select mod(10,3) from dual; or within pl/sql if mod(lc, 50) = 0 then commit; end if; --- Q: What is a "fetch across commit" and why is it bad? A; From an oracle-L posting 5/2/2000 by Dave Wotton "Fetch across commit" is a programming technique where you open a cursor, fetch rows from it, update the rows and commit the updates within the cursor loop. ie. your cursor loop is open (and fetching) across one or more commits. Why is it generally bad? Because it often leads to Ora-1555 ora "snapshot too old" errors when you use this technique on a sufficiently large table. Oracle must keep a consistent view of the original data, but you're constantly updating it. Eventually, you'll run out of undospace (if you don't have a massive amount defined) and the job will fail. 
Solutions are to do mass updating in batches, without the use of an update cursor. Or, if the cursor is mandatory, build in logic to commit every X number of rows and then CLOSE THE CURSOR and reopen it. Even using the same code, at worst case the next cursor won't return nearly as many rows, and you can repeat the behavior until all rows are updated. --- Q: What is a pivot query? A: A query where you take the results of a particular column of a conventional query and turn them into columns themselves. asktom has a simple example where a decode is used with a group-by subselect. More complicated examples, involving an unknown data set, are possible but involve pretty serious sql. Apparently the listfunction() in ms sql server does this automatically. --- Q: How can I limit my query to one particular partition? A: select count(*) from f_trng partition (p3); --- Q: What are some common Oracle developer mistakes when writing pl/sql code? A: (pulled from various sources) - Doesn't take advantage of BIND Variables - Doesn't make use of generic error handling routines. - Performs table joins based on MASSIVE driving tables. - Doesn't use CURSOR FOR LOOPs. - Contains inefficient control loop statements. - Unintentionally performs implicit datatype conversion. - Code that doesn't take advantage of the ability of some databases to "pin" frequently used procedures/packages/functions in memory. - Code that incorrectly sizes datatypes. --- Q: How do you get mean (average) and median of a data set? A: mean/avg: select avg(count) from t1; median: select percentile_disc (0.5) within group (order by count) from t1 --- Q: What is the syntax in Oracle for Cursors? How do I write a cursor?
A: Generically: declare v_paygroup varchar2(3); cursor c_get_paygroups is select distinct paygroup from edw_ps.pay_periods_dim payperiod order by 1; begin open c_get_paygroups; loop fetch c_get_paygroups into v_paygroup; exit when c_get_paygroups%NOTFOUND; insert into paygroup_lasttwo (paygroup) values (v_paygroup); end loop; close c_get_paygroups; commit; -- or commit inside. end; Here's another method using ROWTYPE to hold the returning variables. declare v_paygroup varchar2(3); cursor c_get_paygroups is select distinct paygroup from edw_ps.pay_periods_dim payperiod order by 1; v_get_paygroups c_get_paygroups%ROWTYPE; begin open c_get_paygroups; loop fetch c_get_paygroups into v_get_paygroups; --fetch c_get_paygroups into v_paygroup; exit when c_get_paygroups%NOTFOUND; v_paygroup := v_get_paygroups.paygroup; insert into paygroup_lasttwo (paygroup) values (v_paygroup); end loop; close c_get_paygroups; commit; -- or commit inside. end; DON'T forget Begin and end! --- Q: What is the Cursor: pin S wait on X wait event? A: http://www.pythian.com/blog/cursor-pin-s-wait-on-x-in-the-top-5-wait-events/ Cursor or Mutex contention. The wait event is telling us that an exclusively held cursor is waiting to be put into shared mode. --- Q: Are nulls in numeric fields treated the same as zeros? A: NO, when doing averages or medians over a set of numbers, null records are ignored completely from all calculations. 
Example sql to demonstrate: drop table number_test; create table number_test (col1 varchar2(5),col2 integer); insert into number_test values ('a',10); insert into number_test values ('b',5); insert into number_test values ('c',1); insert into number_test values ('d',null); insert into number_test values ('e',50); insert into number_test values ('f',0); insert into number_test values ('g',null); insert into number_test values ('h',25); insert into number_test values ('i',0); select * from number_test; select sum(col2) from number_test; -- 91 select avg(col2) from number_test; -- 13 (which is 91 divided by 7) select percentile_disc (0.5) within group (order by col2) from number_test; -- Median: 5 update number_test set col2=0 where col2 is null; select avg(col2) from number_test; -- 10.11 (which is 91 divided by 9) select percentile_disc (0.5) within group (order by col2) from number_test; -- Median: 1 --- Q: What is the difference between the various numeric data types (number, integer, decimal, float)? A: ?? not complete, need examples of what happens to integers/smallints, reals, etc. Some various points: - Oracle's primary three numeric data types are: Number, binary_float, binary_double. - Included for Ansi support: numeric, decimal/dec, integer/int/smallint, float, double, real. A number without any precision can store values (in 10g) from 1.0 x 10^-130 to just under 1.0 x 10^126. The storage is dictated by the size of the number and can be from 1 to 22 bytes. number(p,s): precision and scale, e.g. number(15,2). number(n): a number of fixed length n digits. equivalent to number(n,0) numbers inserted into a number(p,s) that exceed the scale will be rounded to the closest scale (i.e., 1.4999 inserted into a number(5,2) will be stored as 1.5) Float is the equivalent of a precision-less and scale-less number field; they are synonymous. --- Q: How do I order my numeric results by actual numbers, instead of by "alphabetic" order?
A: select * from number_order_test order by (col1+0); the col1+0 will convert the field to a number and then sort it properly, even if the col1 is defined as a varchar. See this example: drop table number_order_test; create table number_order_test (col1 varchar2(10)); insert into number_order_test values (2); insert into number_order_test values (20); insert into number_order_test values (21); insert into number_order_test values (3); insert into number_order_test values (1); insert into number_order_test values (10); insert into number_order_test values (100); select * from number_order_test order by col1; select * from number_order_test order by (col1+0); --- Q: What is a merge statement? What is the syntax of the Merge command? When is it appropriate to use? A: A merge statement is a combination insert/update ("upsert" in database parlance) command useful to replace older if-then-else clauses and combines inserts and updates based on a condition into a single statement. Note: do not use a join field in your update statement or Oracle will get confused. 
Example: MERGE INTO tableA USING tableB ON (tableA.pk = tableB.pk) WHEN MATCHED THEN UPDATE set tableA.field1 = tableB.field1, tableA.field2 = tableB.field2 WHEN NOT MATCHED THEN INSERT (pk,field1,field2) values (tableB.pk,tableB.field1,tableB.field2); CityDW real world example: MERGE INTO EDW_PASS.DIM_ORG_CODE EDW USING (SELECT * FROM STAG_PASS.DIM_ORG_CODE_TMP@DCSSTGP) TMP ON (EDW.ROOTID = TMP.ROOTID) WHEN MATCHED THEN UPDATE SET EDW.ORG_CODE_ID = TMP.ORG_CODE_ID, EDW.ORG_CODE_NAME = TMP.ORG_CODE_NAME, EDW.AGENCY_NAME = TMP.AGENCY_NAME, EDW.AGENCY_ID = TMP.AGENCY_ID, EDW.DIVISION_CLUSTER = TMP.DIVISION_CLUSTER, EDW.SCHOOL_TYPE_CLUSTER = TMP.SCHOOL_TYPE_CLUSTER, EDW.DCS_LAST_MOD_DTTM = SYSDATE WHEN NOT MATCHED THEN INSERT (EDW.ROOTID, EDW.ORG_CODE_ID, EDW.ORG_CODE_NAME, EDW.AGENCY_NAME, EDW.AGENCY_ID, EDW.DIVISION_CLUSTER, EDW.SCHOOL_TYPE_CLUSTER, -- EDW.DCS_ADD_DTTM, EDW.DCS_LAST_MOD_DTTM) VALUES (TMP.ROOTID, TMP.ORG_CODE_ID, TMP.ORG_CODE_NAME, TMP.AGENCY_NAME, TMP.AGENCY_ID, TMP.DIVISION_CLUSTER, TMP.SCHOOL_TYPE_CLUSTER, -- SYSDATE, SYSDATE); --- Q: Can you do a case statement in a where clause? Can you do a case statement in an update clause? A: absolutely: Select * from Meta_Mbr_Param where Case When Fiscal_Calendar_Start_Month = 'NOV' then 1 else 0 End = 1 -- another simple example create table test (key integer, flag char(1), text varchar2(20)); insert into test values (1,'Y','some text'); insert into test values (2,'N','some text'); insert into test values (3,'U','some text'); insert into test values (4,'N','some text'); insert into test values (5,'Y','some text'); update test set text = case when (flag='Y') then ('its now yes') when (flag='N') then ('its no now') else ('unknown') end; --- Q: How do you allow users to see the package body of code? A: grant execute on user.pkg only lets you see the specs in Toad and SQL Developer. 
Answer: create any procedure Somewhat of a flaw in oracle: "select any package body" or "select any procedure body" would be a more appropriate permission. As a result of this, developers who need to see the code within the pkg bodies of another schema's code need "create any procedure" which is a rather powerful permission. --- Q: How can I see a full list of packages and/or package bodies in my database? Is there a DBA_PACKAGE view like with other object types? A: NO. You'll have to query dba_objects like this: select * from dba_objects where owner='CGFSDW' and object_type='PACKAGE'; --- Q: How do I hide the body of a package/procedure/function? A: wrap it with "Wrapping" procedures. https://docs.oracle.com/cd/B28359_01/appdev.111/b28370/wrap.htm#LNPLS01604 You can wrap code one of two ways: 1. wrap the .SQL file at the command line to generate an executable file that has hidden the code: syntax: wrap iname=input_file [ oname=output_file ] example: wrap iname=sql7_table_index_stats_pkg_cgfsdw.sql will result in a file sql7_table_index_stats_pkg_cgfsdw.plb When you observe this .PLB file, you'll see that the create or replace package owner.name will have had "wrapped" attached to the create object line, and the sql is now encoded. 2. Call DBMS_DDL.CREATE_WRAPPED(package_text); In both cases, the text in dba_source is encoded. --- Q: Is it an oracle myth that you have to commit every X rows in a loop? A: as of 8i, YES. In previous versions probably not. From oracle-L discussions and ask tom's sites, in versions 6 and 7 it was common practice to commit every 50 or 100 rows in order to get decent performance or to avoid ORA-1555s. This code will commit every 50 rows: lc := 0; for d in C loop lc:=lc+1; insert into table values... if mod(lc,50) = 0 then commit; end if; end loop; commit; --- Q: Which is better, using an "in" clause or an "exists" clause?
A: from asktom Select * from T1 where x in ( select y from T2 ) ; versus select * from T1 where exists ( select null from T2 where y = x ); In example 1 (the in clause), the subquery is evaluated and resolved, then joined to the original table, hence it can use indexing on T2 to return the result set quickly. Plus, the T1 query can make use of indexes resident on T1. in example 2, internally Oracle converts the where exists clause into a for loop, scrolling through every record in t1 and hence forcing a Full table scan on T1. When is the "where exists" more appropriate? Places where the T1 table result set is very small and doing the full table scan is quick. the "in" clause will always be more appropriate when the subquery result is small. If both the main and subquery are big, it may be a wash. Rule of thumb: BIG outer query and SMALL inner query = IN. SMALL outer query and BIG inner query = WHERE EXISTS. --- Q: Does union give you a free distinct and order by? A: Yes! select col1 from union1 union select col1 from union2 actually is doing this: select distinct col1 from ( select distinct col1 from union1 union select distinct col1 from union2) order by col1; (Note: the "free" ordering is a side effect of the sort used to remove duplicates; it is not guaranteed, so still use an explicit ORDER BY if order matters.) --- Q: How would you union a data set and NOT get the free distinct and order by? A: union all select col1 from union1 union all select col1 from union2 will get all results from both tables and will NOT sort it. --- Q: What is the opposite of a union? A: intersect query select t1.col1 from t1 intersect select t2.col1 from t2; --- Q: What is a subquery? A: simply put, a query within a query. example: select suppliers.name, subquery1.total_amt from suppliers, (select supplier_id, Sum(orders.amount) as total_amt from orders group by supplier_id) subquery1 where subquery1.supplier_id = suppliers.supplier_id; --- Q: What is a correlated subquery? A: Accepted definition: A correlated subquery is one that is evaluated once for every row of the outer query.
It connects to the "main" query by virtue of the correlation variable. Examples: (example borrowed from searchoracle.techtarget.com) select studentname , studentmark , ( select avg(studentmark) from students where class = t1.class ) as classaverage , ( select avg(studentmark) from students ) as schoolaverage from students t1 ; t1 is the correlation variable, and the two subqueries will be evaluated "for each" student in the main query. You can also have the correlated subquery in the where clause to limit rows. Example: select category , articletitle , articlepubdate from articles zz where articlepubdate = ( select max(articlepubdate) from articles where category = zz.category ) --- Q: How does a correlated subquery differ from a regular sub query? A: by the correlation; the correlated subquery connects up with the rest of the query by way of the correlation variable. --- Q: What is the syntax of the "replace()" function? A: Similar to this command: this will turn a left parenthesis ( figure into a dash - select replace('($5,216.92)','(','-') from dual; -- this completely strips a negative dollar figure into a number for conversion efforts. select replace(replace(replace(replace('($5,216.92)','(','-'),'$',''),')',''),',','') from dual; --- Q: How do you do regular expressions in SQL in Oracle? A: A series of functions added to pl/sql to emulate unix style regular expressions.
regexp_replace: similar to s/pattern/newpattern/ in sed/awk regexp_like: similar to the =~ syntax in perl Some examples: -- replace all "$" characters with blanks in a string select col2, REGEXP_REPLACE(col2, '\$', '') from re_test; -- strips out 5-digit zipcodes at the end of a string select regexp_replace('1100 15th St NW Washington DC 20005','\d{5}$','') as clean_addr from dual; select regexp_replace('1100 15th St NW, Washington DC 20005-1234','\d{5}-\d{4}$','') as clean_addr from dual; -- this will strip out whatever is after "unit" to the end of the string with select regexp_replace('1 SCOTT CIR NW UNIT 00001','UNIT..*$','') as clean_addr from dual; -- this only shows strings that contain one of these four strings select * from gis_overlay.fix_unknown_addresses where dcstat_addr_id < 1 and regexp_like(raw_address,'NE|NW|SE|SW') -- this will clean all CR/LF (carriage return/line feeds) from a field. update tboss.resolution_method_lookup set resolution_method = REGEXP_REPLACE(resolution_method, chr(10), ''); update tboss.resolution_method_lookup set resolution_method = REGEXP_REPLACE(resolution_method, chr(13), ''); --- Q: How do you convert currency to numeric values? A: select TO_NUMBER('<$1,000.87>', '$999G999G999D99PR') from dual; or, to replace the "<" with the accounting style ($5.00) do the following: select translate(to_char(-13214.2973,'999,999.0PR'),'<>','()') from dual or a series of replace() or regexp_replace() commands. --- Q: I'm getting an ORA-01722: invalid number when trying to do a basic query involving to_number. A: you probably have embedded spaces in one of the varchar fields being converted. select ' ',to_number(' ') from dual; returns same error --- Q: Where is the list of Ansi characters within oracle, and how do you reference them? A: chr() function shows how to convert them, but can't find the list. And, the ascii characters listed at http://www.asciitable.com/ do NOT match what's in oracle.
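For the chr() question above, a short sketch of how chr() and ascii() reference character codes. The expected values assume an ASCII-based database character set (e.g. WE8ISO8859P1 or AL32UTF8); other character sets can differ, which may explain mismatches with generic ASCII tables:

```sql
-- chr(n) returns the character for code point n; ascii(c) is the inverse.
select chr(65) from dual;        -- 'A' in an ASCII-based charset
select ascii('A') from dual;     -- 65
-- chr(10)/chr(13) are the usual way to reference linefeed/carriage return,
-- as in the regexp_replace CR/LF cleanup shown earlier:
select 'line1' || chr(10) || 'line2' from dual;
```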
--- Q: I'm getting "ORA-22992: cannot use LOB locators selected from remote tables" error while trying to select a clob across a db link. How do I get around this? A: Oracle has some oddities when related to dealing with CLOBS across db links. These commands will lead to error: select pk,field1,clob from table@remotesvr; however, this will not: create table localtbl as select * from table@remotesvr; The solution to getting around the ORA-22992 is as follows: 1. select all fields in table except CLOB across dblink insert into localtbl select pk, field1, field2, ... from table@remotesvr; update localtbl lcl set clobfield = (select rmt.clobfield from table@remotesvr rmt where lcl.pk = rmt.pk); ---- Q: How do you send email out of an Oracle database? A: utl_smtp routine. metalink note: doc id 604763.1 has a great set of example scripts to test with. --- Q: How do you use arrays in pl/sql code? A: ?? need finishing... declare TYPE t_array IS TABLE OF VARCHAR2(2000) INDEX BY BINARY_INTEGER; strings t_array; strings(1) := 'abc'; strings(2) := 'def'; =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- PL/SQL Coding Specific: Dates only/Date Specific =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- --- Q: What are the caveats to working with dates in Oracle? A: - Oracle stores the date and time together like sybase but defaults to only show the date during selects. However, calls from perl code get date and time (depending on implementation used ... DBD::ODBC). Thus, you might have to trim off the time in some cases like this: select to_char(date_field,'mm-dd-yyyy') from table - To get the time, you must do something like to_char(date,'mm-dd-yy hh:mm:ss') - Oracle time only goes to the second; if you want milliseconds you have to design your own datatype. 
- This works: select TO_CHAR(systimestamp, 'HH24:MI:SS.FF4') from dual;
- This doesn't: select TO_CHAR(sysdate, 'HH24:MI:SS.FF4') from dual;
---
Q: What is the Oracle equivalent of Sybase's/MS Sql's getdate()?
A: sysdate
---
Q: Can you do a "between" command in Oracle with dates?
A: Yes!
select * from meeting where start_date between ('31-dec-2002') and ('2-jan-2003');
---
Q: What is the default way to insert a date from sql*plus/straight sql?
A: the "default way" depends on the NLS_DATE_FORMAT, which for most north american Oracle servers defaults to "dd-mon-yy"
SQL> insert into table values ('dd-Mon-yyyy')
You can also insert any date by giving the mapping value of the dateformat so that oracle knows how to format it. Examples:
SQL> insert into table values (to_date('05-27-99','mm-dd-yy'))
---
Q: how do you insert time?
A: Using a to_date formatted like this:
select TO_DATE('12/16/1997 14:20:20' , 'MM/DD/YYYY HH24:MI:SS') from dual;
select TO_DATE('12/16/1997 2:20:20 PM' , 'MM/DD/YYYY HH:MI:SS AM') from dual;
---
Q: How do you trim off the time neatly when only desiring to show the date?
A: - select to_char(date_field,'mm-dd-yyyy') from table
or
- select trunc(date)
---
Q: Is there just a "date" field without the timestamp?
A: No; date always includes time. If you want to default the time to midnight easily:
insert into table values (trunc(sysdate));
---
Q: How do you get milliseconds out of a date?
A: 8i and below: DBMS_UTILITY.GET_TIME
9i and up: select systimestamp from dual;
To get the fractional seconds: select to_char(systimestamp, 'FF') from dual;
---
Q: How do you do datediff in Oracle?
A: select (sysdate+1 - sysdate) from dual; returns exactly 1, so the difference of two dates/timestamps is a (possibly fractional) number of days. Multiply by 86400 to express it in seconds:
select 86400*(dateA - dateB) as secs_elapsed from dual;
# this gets you the number of days between two dates
alter session set NLS_DATE_FORMAT="DD-MON-YY";
select (to_date('02-FEB-1985') - to_date('21-MAR-1966')) from dual;
# this gives years.days
select (to_date('27-JAN-2002') - to_date('13-JAN-1983')) /365 from dual;
# which you could take the results from and do the following...
select 6893/365 from dual;
select round(.0520*365) from dual;
---
Q: How do I get current age?
A: select round((sysdate-to_date('07-MAY-1971'))/365.25,2) as age from dual;
---
Q: How many bytes of storage does a "date" field take?
A: 7 bytes
---
Q: How do you get first day and last day of the month for any given day?
A: select to_char(sysdate,'YYYY') || to_char(sysdate,'MM') || '01' from dual;
select to_char(last_day(sysdate),'yyyymmdd') from dual;
or
select last_day(to_date(20050206,'yyyymmdd')) from dual;
---
Q: How do you change the default date format?
A: NLS_DATE_FORMAT
dw@dw30tst> select sysdate from dual;
SYSDATE
---------
08-JUN-06
dw@dw30tst> alter session set NLS_DATE_FORMAT = "YYYY/MM/DD" ;
dw@dw30tst> select sysdate from dual;
SYSDATE
----------
2006/06/08
dw@dw30tst> alter session set NLS_DATE_FORMAT = "MM/DD/YYYY HH24:MI:SS";
dw@dw30tst> select sysdate from dual;
SYSDATE
-------------------
08/29/2007 12:21:12
dw@dw30tst> alter session set NLS_DATE_FORMAT="DD-MON-YY";
dw@dw30tst> select sysdate from dual;
SYSDATE
---------
08-JUN-06
---
Q: What is the default NLS_DATE_FORMAT?
A: depends on the NLS_TERRITORY variable, but for America it is "DD-MON-YY"
---
Q: Can I change the NLS_DATE_FORMAT system wide?
A: Yes; put nls_date_format = "YYYY/MM/DD" into init.ora
---
Q: What is "timestamp with time zone" and these other new time-based datatypes in 10g?
A: the 10g doc set covers several newer date-based datatypes (these actually first appeared in 9i):
- TIMESTAMP WITH TIME ZONE
- TIMESTAMP WITH LOCAL TIME ZONE
- INTERVAL YEAR TO MONTH
- INTERVAL DAY TO SECOND
You can convert these datetime values to viewable strings just as easily as before with to_char(). Example:
select TO_CHAR(start_date, 'DD-MON-YYYY HH24:MI:SS') from dba_scheduler_jobs;
---
Q: How do you get date parts of a date (like say just the year?)
A: just change the date mapping in the to_char function
year:
select to_char(sysdate,'YYYY') from dual -- returns 2007
month:
select to_char(sysdate,'MM') from dual -- returns 08
select to_char(sysdate,'MON') from dual -- returns AUG
select to_char(sysdate,'MONTH') from dual -- returns AUGUST
day:
select to_char(sysdate,'DD') from dual -- returns 15
select to_char(sysdate,'DAY') from dual -- returns WEDNESDAY
hour:
select to_char(sysdate,'HH') from dual -- returns 3
select to_char(sysdate,'HH24') from dual -- returns 15
etc. Search the oracle manuals for "Datetime Format Elements" for the full list.
---
Q: How can I get the first day of the year for a given day?
A: select '01-JAN-' || (select to_char(sysdate,'YYYY') from dual) from dual;
---
Q: Why does my simple to_date(datefield,'MM/DD/YYYY') fail with an 'invalid month' error?
A: to_date will fail for obvious errors (passing in a month that is not between 1 and 12) but also fails if there are embedded spaces in the datefield. NULL in the field is ok, spaces are not.
these all work fine:
select to_date('1/1/1970','MM/DD/YYYY') from dual;
select to_date(' 1/25/1989 ','MM/DD/YYYY') from dual;
select to_date('','MM/DD/YYYY') from dual;
select to_date(NULL,'MM/DD/YYYY') from dual;
these all give the error "ORA-01843: not a valid month."
select to_date('0/1/1970','MM/DD/YYYY') from dual;
select to_date('13/1/1970','MM/DD/YYYY') from dual;
select to_date(' ','MM/DD/YYYY') from dual;
---
Q: How do I get data for the last 24 hour period?
A: select * from table where date > sysdate-1; --- Q: How do I get data for the last X hours? A: select * from table where date > sysdate-X/24; ---- Q: Microsoft stores dates down to 3 decimal places of a second. How do you do that in oracle? Q: How do you store fractions of a second for date/time stamps in oracle? A: timestamp(x) field. drop table datetest; create table datetest (col1 date, col2 timestamp, col3 timestamp(3), col4 timestamp with time zone, col5 timestamp with local time zone); insert into datetest values (sysdate,sysdate,sysdate,sysdate,sysdate); select * from datetest; select TO_timestamp('2010-11-08 16:58:18.083' , 'YYYY-MM-DD HH24:MI:SS.FF') from dual; ---- Q: I have time zone hours/minutes in my date. how do I work with them? A: Data example: select '2008-10-29T00:00:00-05:00' from dual; Answer: use to_timestamp_tz function select to_timestamp_tz('2008-10-29T00:00:00-05:00 ','YYYY-MM-DD"T"HH24:MI:SSTZH:TZM') from dual; --- Q: How do I easily extract specific date/time values out of a date/timestamp? A: extract() function. -- Returns the day of the month select extract(day from sysdate) from dual; --- Q: How do I store the time zone within my date/time stamp? A: define the field as "timestamp with time zone" example: create table datetest2 (col1 timestamp with time zone); insert into datetest2 values (systimestamp); select * from datetest2; select sessiontimezone from dual; select TO_CHAR(sysdate,'YYYY/MM/DD HH24:MI:SS') as local_time, TO_CHAR(new_time(sysdate, (SELECT EXTRACT(TIMEZONE_ABBR from sysTIMESTAMP) from dual),'GMT'),'YYYY/MM/DD HH24:MI:SS') as gmt_time from dual; select current_timestamp, systimestamp from dual; =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Oracle Concepts/ =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- --- Q: How do partitioned tables work? 
A: Basically allows a table to have multiple underlying tablespaces so that it can spread data inserts across multiple tablespace segments. Ideally you'd have these on different i/o devices. You can also do loads, selects, dml, etc on a partition of data, instead of the whole table.
The main kinds of partitioning:
- range: splits data based on ranges of data (ex: by month). Most common form. Once partitioned by range, the optimizer knows this and will only scan the particular partitions that are applicable.
- list: like range, except instead of having the partitions divided by range, you can manually pick what values go on which partition.
- hash: lets Oracle choose how to keep the partitions balanced.
- composite: lets you use one style of partitioning, then have subpartitions underneath of another style. You can range partition on one field, then hash partition the subpartitions.
- interval: added 11gR1: similar to range but allows Oracle to auto-add partitions.
- reference: added 11gR1: allows partitioning by a FK to a parent table.
Underneath partitioned tables, you have several indexing options:
- Local partitioned indexes: indexes on the data within each partition.
- Global partitioned indexes: index across all data in all partitions, but the index itself is partitioned.
- Global non-partitioned indexes: a conventional index that references data across all partitions.
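The three index options above can be sketched in DDL against a hypothetical range-partitioned table (all object names here are made up for illustration):

```sql
-- hypothetical range-partitioned table
create table sales (
  sale_id   number,
  sale_date date,
  amount    number
)
partition by range (sale_date) (
  partition p2011 values less than (to_date('01-JAN-2012','DD-MON-YYYY')),
  partition p2012 values less than (to_date('01-JAN-2013','DD-MON-YYYY'))
);

-- local partitioned index: one index segment per table partition
create index sales_date_lidx on sales (sale_date) local;

-- global partitioned index: partitioned independently of the table
-- (a global range-partitioned index must end with a maxvalue partition)
create index sales_id_gidx on sales (sale_id)
  global partition by range (sale_id)
  (partition ig1 values less than (1000000),
   partition ig2 values less than (maxvalue));

-- global non-partitioned index: one conventional index over all partitions
create index sales_amt_idx on sales (amount);
```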
Composite partitioning options by release (taken from Tom Kyte's new book 4/17/12; rows are the partition type, columns the subpartition type):

              Range    List     Hash
  Range       11gR1    9iR2     8iR1
  List        11gR1    11gR1    11gR1
  Hash        11gR2    11gR2    11gR2
  Interval    11gR2    11gR2    11gR2

11g updates from Composite partitioning options
http://www.orafaq.com/wiki/Composite_partitioning
Range-hash partitioning was introduced in Oracle 8i
Range-list partitioning was introduced in Oracle 9i
Range-range partitioning was introduced in Oracle 11g
List-range partitioning was introduced in Oracle 11g
List-hash partitioning was introduced in Oracle 11g
List-list partitioning was introduced in Oracle 11g
Interval-range partitioning was introduced in Oracle 11g
Interval-list partitioning was introduced in Oracle 11g
Interval-hash partitioning was introduced in Oracle 11g
Hash-hash partitioning was introduced in Oracle 11gR2
---
Q: What are Pros and Cons to partitioning? Rather, what can partitioning do for you, and when should you consider NOT using it?
A: (adapted from a Jonathan Lewis posting on c.d.o.s)
Pros:
1. Performance: data set is large enough that "free precision" is necessary in order to get at the data quickly. Data Warehouses, massively large tables with consistent use of particular where clauses (dates, date ranges, specific pay periods, load keys, etc). Performance pruning can be great.
2. Administrative: databases with specific requirements to "archive off" data benefit from partitioning.
3. Making certain partitions read-only to reduce backup requirements. These tablespaces can also be compressed to save on i/o.
4. Use of alter-table exchange partition to do bulk loading outside of the main fact tables. Huge benefit to pre-index, pre-gather stats on data.
Cons:
1. Badly implemented partitioning will cause queries to work WORSE than if no partitioning was used. Especially problematic in DW shops with badly implemented ad-hoc tools.
2. Queries sometimes don't make use of the planned partitioning scheme.
3.
added administrative issues; dml becomes more complicated, data loads become more complicated.
4. Generally no RI, no validated FKs, and having global indexes will slow down alter table exchange partition.
---
Q: What is the syntax for partitioning?
A: Here's a simple "list" partitioning example:
create table emply (ld_key integer, text_field varchar2(255))
partition by list (ld_key)
(partition hl_1 values (1),
 partition hl_2 values (2)
) tablespace ehri20qa;
You can add a "default" partition as a catchall, but if you do, you cannot add any other "real" partitions afterwards.
---
Q: What are some great links on partitioning?
A: Syntax for all sorts of variations of composite partitioning:
http://psoug.org/reference/partitions.html?PHPSESSID=70a81af6d8ec7c6746452d2c310e1d27
List of all the composite partitioning combinations:
http://www.orafaq.com/wiki/Composite_partitioning
http://www.oracle.com/technetwork/testcontent/o57partition-086998.html
http://www.dba-oracle.com/t_interval_partitioning.htm
http://docs.oracle.com/cd/E11882_01/server.112/e25523/partition.htm#CACHFHHF
---
Q: Is there a maximum number of partitions per table?
A: 9i: 65536-1
10g: 1024K-1
11g: 1024K-1
Note: this limit counts both partitions AND subpartitions ... so if you're using composite partitioning, it's possible to reach this limit very quickly.
(confirmed by the "Logical Database Limits" link in Oracle's documentation.)
Logical limits: http://docs.oracle.com/cd/B28359_01/server.111/b28320/limits003.htm
Physical limits: http://docs.oracle.com/cd/B28359_01/server.111/b28320/limits002.htm
---
Q: Can you "move" a partition to another tablespace?
A: Yes:
alter table TABLE_NAME move partition PARTITION_NAME tablespace NEW_TS_NAME;
alter index INDEX_NAME rebuild partition PARTITION_NAME tablespace NEW_TS_NAME;
---
Q: Can you change the partitioning scheme of a table in realtime? What kind of performance hit would that cause?
A: No, you'd have to create a new table, then insert into new select * from old.
---
Q: Can you exchange partitions of a table with FK constraints?
A: Not generally, no. The SQL Reference manual under alter table says the tables involved in the exchange must have the same PK, and "no validated foreign keys can be referencing either of the tables unless the referenced table is empty." So, if you've got FKs on your tables, they must be either disabled or altered to be "novalidate."
---
Q: What is partition "pruning" or partition-wise joins?
A: partition pruning appears in the optimizer output as "PARTITION RANGE ITERATOR" and basically means that the database is able to use the where clause to select exactly the partitions of data that are needed to resolve the query requirements.
a "partition-wise join" occurs when parallel servers run joins between two partitioned tables. It can only occur if the join column is "equi-partitioned" (meaning both tables are partitioned the same way on the join column).
---
Q: What is the fastest method of partition pruning?
A: Partition access paths commonly seen in explain plan output:
PARTITION RANGE ALL: range type partitioning being used but no partition pruning occurring. Probably not including the partitioning column in the where clause.
PARTITION LIST ALL: same as above, except with list partitioning
PARTITION HASH ALL: same as above except using Hash Partitioning
PARTITION RANGE ITERATOR: this means the where clause forces the optimizer to iterate or repeat its actions over a range of connected partitions, starting with the Pstart and ending with Pstop.
PARTITION RANGE SINGLE: the where clause limited the range to exactly one partition (identified by the Pstart/Pstop columns).
PARTITION HASH SINGLE: same for hash
PARTITION LIST SINGLE: same for list
PARTITION LIST SUBQUERY
PARTITION LIST INLIST: where an "in (1,2,3)" clause is used to partition prune.
If you've turned on parallel, the above partition information gets lost sometimes.
PX BLOCK ITERATOR
PX PARTITION LIST SUBQUERY
---
Q: Are there any caveats to working with partitioning?
A: Just notes from working with them:
- partition names cannot start with numbers
- composite range partitioning may carry great costs; it can be slower than doing partition and then subpartition pruning.
---
Q: what is a quick and dirty way to see all the tables in my database using partitioning?
A: select distinct table_owner,table_name from dba_tab_partitions
where table_owner not in ('SYS','SYSTEM','SH','SYSMAN') and table_name not like 'BIN$%';
---
Q: How do you "move" the header construct of a partitioned table to a new tablespace?
A: alter table person modify default attributes tablespace PERSON_ENC_DATA01;
---
Q: What is the difference between global and local indexes?
A: A global index references all rows, across all partitions, while a local index only looks at data within its specific partition, as if the local partition's worth of data was its own stand-alone table. Globals can be good to use when you have queries that frequently span partitions, while locals can be good when your queries prune down to specific partitions.
---
Q: How can you tell if an index is global or local?
A: select * from dba_indexes and look for the "partitioned" field.
---
Q: What is faster, querying a partitioned table by a global index or making use of partition pruning?
A: It depends, but in all likelihood it should be partition pruning; need to setup a test case to prove it. Odds are that the more data you have, the more partition pruning will help.
---
Q: Does the "update global indexes" clause during alter table exchange partition execute a full index rebuild?
A: No; it's designed to only update those parts of the index that were affected by the alter table exchange partition command.
Pros: the index stays usable throughout the operation
Cons: much slower, generates a ton more redo/undo. 15% slower than a full index rebuild on asktom tests.
It is suggested to use local indexes whenever possible.
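The exchange-partition pattern being discussed can be sketched like this (table and partition names are hypothetical):

```sql
-- swap a pre-loaded, pre-indexed standalone table into the partitioned
-- table; "update global indexes" keeps global indexes usable instead of
-- leaving them UNUSABLE for a full rebuild
alter table sales exchange partition p2012
  with table sales_staging
  including indexes
  without validation
  update global indexes;
```

Without the "update global indexes" clause, the global index would be marked unusable and would need an explicit rebuild afterwards.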
Search asktom for "update global indexes" and read the question that says, "Local versus Global indexes on daily changing partitioned table".
---
Q: How can I tell what my tables are partitioned by?
A: select * from DBA_PART_KEY_COLUMNS;
---
Q: How do you work with Transportable Tablespaces?
A: - First, read the chapter in the Administrator's guide on Managing Tablespaces (chapter 9 in 8i, chapter 11 in 9i doc sets).
Key caveats that may prevent using transportable tablespaces:
- your source and target OS must be the same
- your source and tgt database server must be the same version
- you must have the same block sizes
Steps:
- set the tablespace in read only mode (note, this may take some time).
sql> alter tablespace ttstest read only;
- export the tablespace using the transport_tablespace=y flag. This exports just the structures of the objects in the tablespace.
exp transport_tablespace=y tablespaces=(ttstest) TRIGGERS=y CONSTRAINTS=n GRANTS=n FILE=tts.dmp
- import the tablespace
- Note! You must log in to exp as a sysdba user to be able to use the transport_tablespace flag, and the syntax when scripting it can be tricky. Without specifying as sysdba you'll get the error: EXP-00044: must be connected 'AS SYSDBA' ..., and if you don't quote it properly, you'll get LRM-00108: invalid positional parameter value 'as'. This example works:
exp \'sys/sys@stg30tst as sysdba\' transport_tablespace=y tablespaces=staging_holding file=tmp.dmp
---
Q: I've tried to export with transport_tablespace=y, and get the following error:
EXP-00008: ORACLE error 29341 encountered
ORA-29341: The transportable set is not self-contained
How do I fix this?
A: - exec dbms_tts.transport_set_check('ts_name',true); (You must do this as sys unless you've granted "execute_catalog_role" to another user.) This checks the tablespace for inter-tablespace dependencies. Tables can only be transported if there are no cross-tablespace constraints/dependencies.
- SELECT * FROM TRANSPORT_SET_VIOLATIONS; this is where the above procedure reports its findings. If you find results here, they must be cleared before you can transport the tablespace.
---
Q: Are there any caveats to working with Compressed Tablespaces?
A: - there's a known bug preventing alter table add column in older 9.x versions.
- index creation takes longer
- still buggy in 9i: ora-600s occur when doing certain operations according to some DBAs (while looking for chained rows)
HOWEVER, participants on Oracle-L report great performance increases. Queries went from running in 3:45 to 6 seconds. Space savings was about 40% on data, 18% on indexes.
---
Q: How do you setup Parallel Server in Oracle? What benefits does it really have? /parallel/
A: Pretty easy:
- set parallel_automatic_tuning=TRUE
- alter objects to be parallel (alter table X parallel or alter index X parallel)
Benefits are phenomenal when you have partitioned tables on different i/o devices; simply turning on parallel server with a thrice-partitioned table improved I/O performance 42%.
HOWEVER/Caveat! Parallel I/O is NOT the greatest thing to just turn on everywhere. Only use it where it makes sense, as I/O bottlenecks are exposed badly. Parallel usually forces FTS on objects that have covering indexes.
If you see "PX_GRANULE()" function calls in your query plans, you're using parallel server to resolve the query.
---
Q: How can you tell which tables/indexes have been altered parallel?
A: degree field in dba_tables or dba_indexes. If 1, then not parallel.
---
Q: If degree is "default" what does that mean?
A: "DEFAULT" allows Oracle to automatically determine the degree of parallelism against the table when a query is to use parallelism. It depends on the parallel_automatic_tuning parameter being set to true.
show parameter parallel_threads_per_cpu
gives an idea of how many parallel threads may be used, but it will be load driven.
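The degree checks above can be rolled into one query. A sketch (the SCOTT schema is just an example; note DEGREE is stored as a padded string, hence the trim):

```sql
-- tables and indexes with a non-serial parallel setting;
-- DEGREE holds values like '1', '4', or 'DEFAULT'
select table_name, degree
  from dba_tables
 where owner = 'SCOTT'
   and trim(degree) <> '1';

select index_name, degree
  from dba_indexes
 where owner = 'SCOTT'
   and trim(degree) <> '1';
```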
Note: in 11gR2 parallel_automatic_tuning is now deprecated; see parallel_degree_policy going forward.
---
Q: How do you configure parallel_degree_policy?
A: See http://www.dba-oracle.com/t_parallel_degree_policy.htm
http://docs.oracle.com/cd/E11882_01/server.112/e40402/initparams181.htm#REFRN10310
Three settings:
- Manual (default): makes the database act like it did in 11gR1 and before.
- Limited: will do tables but not statement queueing or in-memory
- Auto: enables automatic degree of parallelism, statement queuing, and in-memory parallel execution.
---
Q: how do you set the parallel degree for a table/index?
A: alter table rdw.test parallel 18;
---
Q: How do you set parallel degree to default?
A: alter table rdw.test parallel;
You cannot put "default" into the clause, but altering parallel with no degree overwrites whatever degree was there and sets it to default.
---
Q: How do you interpret Parallel query explain plan output?
A: See the "Using Explain Plan" chapter in the P&T Guide for full details.
---
Q: What is order of parallel definition? What takes precedence? Hints or parallel defined on table?
A: three ways to define parallel: session wide, degree on the object (table/index), and hint within the SQL statement.
Answer: hints override session alters, and hints override table/index definitions. Examples:
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 32;
select /*+ parallel(table,8) */ from...
alter table X parallel 12;
Sources used:
http://www.tyson1.com/professional/parallel/px_enabling.htm
https://docs.oracle.com/cd/A87860_01/doc/server.817/a76965/c22paral.htm
Degree of Parallelism: the degree of parallelism for a query is determined by the following rules: the query uses the maximum degree of parallelism taken from all of the table declarations involved in the query and all of the potential indexes that are candidates to satisfy the query (the reference objects).
That is, the table or index that has the greatest degree of parallelism determines the query's degree of parallelism (maximum query directive). If a table has both a parallel hint specification in the query and a parallel declaration in its table specification, the hint specification takes precedence over the parallel declaration specification.
So, per Oracle's docs, the hint overrides any object-level declaration.
---
Q: What does Query Rewrite (the "query_rewrite" option) really do for you?
A: - It allows the optimizer to use existing Materialized Views instead of the actual tables, if the MVs can resolve the query better. It does this by transparently checking every query, and no change to application code is needed.
- It allows users to create and/or use function based indexes
- It allows the optimizer to resolve Data-Warehousing queries in a more OLAP based method (it allows for Star Transformations if the options are set and the query is a candidate)
To use Query Rewrite functionality, these steps must be done:
- in init.ora: QUERY_REWRITE_ENABLED=TRUE
- in init.ora: QUERY_REWRITE_INTEGRITY=TRUSTED
- grant query rewrite to any user who needs it (consider granting query rewrite to the "connect" role)
- Create MVs (usually subsets of data or aggregates) with the "ENABLE QUERY REWRITE" option set during creation.
- Constraints on the underlying fact tables must be "rely novalidate" and must actually exist on underlying tables to get full rewrite capabilities.
- Dimension objects must be created on any dimensions being joined in the query if you want full rewrite capabilities.
You also need these server parameters in 9i (but hopefully these are already set):
OPTIMIZER_MODE = all_rows, first_rows, or choose
COMPATIBLE = 8.1.0 (or greater)
---
Q: What are the three values for query_rewrite_integrity, and what do they mean?
A: Trusted: uses only fresh MVs, and "trusts" that the constraint relationships are good.
This depends on any RI constraints being created with "rely novalidate" parameters.
Enforced: uses only fresh MVs, and mandates that all constraints are active and validated.
Stale_tolerated: uses fresh or stale MVs, and "trusts" that the constraint relationships are valid.
---
Q: What is the implication of creating RI constraints with rely or novalidate, or specifying enable/disable?
A: - enable: create a normal constraint
- disable: create a constraint, but immediately disable it. This allows an alter constraint enable... to be performed later on.
- enable novalidate: tells Oracle to basically ignore RI in any of the existing rows, but enforce the constraint going forward. Can result in inaccurate results if existing rows violate the constraint.
---
Q: What does a Flashback Query do for you?
A: ???
---
Q: What are the pros and cons of Tablespace Compression?
A: Note: the below was written when doing compression testing on 9i. Things have vastly improved in 10g and 11g, and even more so with Advanced Compression. But these numbers still seem to be representative.
Pros: less space consumed: compression ratios can be immense: 10:1 space savings possible depending on the "fullness" of the tables (see example below)
Cons: increased query response time, DML and load times. Most impact is at load (50% more load time is a common estimate).
inserts: 50% slower in bulk, same speed for incremental inserts
updates: 10-20% slower
deletes: 10% faster (but this seems surprising)
selects: depends: if you're already I/O bound, compressed data makes selects go faster.
Notes: indexes and IOTs were always able to be compressed. In 9i, tables and MVs too.
A table can be partitioned, and ideally is read only in nature. All bitmap indexes on the data will become invalid and must be rebuilt after setting the compress attribute (b*tree indexes are not affected). Then issue an alter table command:
alter table X move partition Y tablespace Z compress;
Compressing a table changes its pctfree value to zero.
This implies that compressed tables should be read-only, as setting pctfree to zero means that any attempts to insert or update will result in row movement. This is easily seen in the above stats where inserts were 50% slower. Jonathan Lewis has found that in practice Oracle leaves some small amount of free space even with pctfree=0% so that one or two updates can be survived without a row movement. It is suggested to purposely use a small pctfree (less than 10 but greater than 0) to keep row migrations to a minimum if you must support DML operations.
---
Q: How does basic Table Compression work?
A: Oracle's basic table compression is not really "compression" in the same sense as we have come to understand it from ZIP files and other compressed archives. Oracle performs 'deduplication' of commonly found strings at the block level. A simplistic example is to replace common varchar strings with an abbreviated code which is then stored once and represented many times. The actual mechanism of compression of course is much more complex and involves interpretation of block headers.
---
Q: How do you implement Table Compression? How much storage savings are possible with Compression?
A: (note: some of this is taken from Jonathan Lewis' series of articles on allthingsoracle.com about Oracle compression).
create table t1 (col1, col2 ...) compress basic;
In Lewis' example, the following scenarios led to different cost savings.
1. Baseline: creating a table with first 50,000 rows from all_objects.
2. CTAS compress basic
3. CTAS compress basic where 1=2, then insert into
4. CTAS compress basic where 1=2, then insert /*+ append */ into
5. create table normal, alter it compress basic
6. alter table move the table in #5
Results:
1. 714 blocks, tbl 10% free
2. 189 blocks, 0% free (73% block savings)
3. 644 blocks, 0% free (10% block savings)
4. 189 blocks, 0% free (73% block savings)
5. 714 blocks, tbl 10% free
6.
189 blocks, 0% free (73% block savings)
Interesting: so essentially whether you do CTAS as select, insert /*+ append */, or alter table move, you achieve the same results with a compressed table.
This link has basic and OLTP compression testing side-by-side with Exadata's Hybrid Columnar Compression (HCC) testing:
http://uhesse.com/2011/01/21/exadata-part-iii-compression/
---
Q: what is OLTP compression?
A: a component of Advanced Compression that is designed to overcome the DML performance limitations of basic compression in OLTP environments.
OLTP Compression == "compress for all operations" (the 11gR1 syntax; renamed "compress for oltp" in 11gR2).
Turning this on basically forces 10% pctfree in blocks but maintains compression comparable to the above basic examples. OLTP compression only seems to work for inserts; updates don't get the advantage.
Lewis suggests manipulating the pctfree by trial and error for your situation until you get to the point where your row migrations are mitigated. He also suggests having data partitioned in different TS with different pctfree figures for older and newer data, presuming that older data won't be inserted/updated as much.
---
Q: What will Advanced Compression get you over Basic Compression?
A: essentially, by purchasing Advanced Compression you eliminate the downsides in basic compression related to performance slowdowns on update/delete operations.
Study at DoState in Jan 2013 using Advanced Compression:
- 52% reduction in storage
- 20-110% improvement in report execution
- 55% reduction in ETL processing.
Wow. What's the downside? Is it just cost?
---
Q: how do you compress a table?
A: SQL> ALTER TABLE rdw.dim_ai_hcc_ql MOVE COMPRESS basic;
SQL> ALTER TABLE rdw.dim_ai_hcc_ql MOVE COMPRESS FOR QUERY low;
---
Q: how do you uncompress a table once you've compressed it?
A: SQL> ALTER TABLE rdw.dim_ai_hcc_ql move nocompress;
---
Q: What is Data Guard?
A: A product that reads through redo log files for "completed" transactions and enacts them on a remote server.
A form of replication.
---
Q: How do you access outside database objects (say, tables in MS Sql Server) from Oracle?
A: Oracle's Transparent Gateway, now known as Enterprise Integration Gateway. Apparently you can also connect through a 3rd party ODBC driver (Merant? DataDirect?)
---
Q: How do you connect msaccess to Oracle? How do you connect ms Access to oracle using ODBC?
A: Steps:
- (prerequisite: you'll need the Oracle client installed on the PC in question).
- Create an ODBC connection on your machine:
1. Start->control panel->administrative tools->Data Sources (ODBC)
2. Click system DSN, click Add
3. Scroll down available drivers and select your Oracle driver (if multiple versions, use the version of the client that you're actively using).
4. Populate the four fields: data source name and description are user-configurable (make them something meaningful, but you cannot use the @ sign), then select your database from the pulldown (this should read from the tnsnames.ora in your Oracle client directory; if you don't see your database then your client and/or tnsnames.ora aren't configured properly) and then the username you'll be connecting to.
5. Test the connection; you'll be prompted for the pwd and then it should test the oracle database connection. Note: you may have to hit "OK" prior to testing the connection to save the configuration.
6. Open MS Access, create a new blank database, then click the "External Data" tab, pull down "More.." and hit ODBC Database. Select the new ODBC data source you just created and you should be prompted with a list of tables that the username can "see" upon login.
7. Select your tables, define the PKs if desired on import, and the tables should appear in Access.
---
Q: What Operating systems are still supported by Oracle as of 10g? What operating systems have been EOL'd?
A: Still active development for 10g:
3 primary release platforms: Solaris 64bit, Linux, Windows 32/64Bit
Secondary 10g supported: AIX 5L 64-bit, Apple MacOS, HP/UX
EOL'd as of 9i: AIX 4.3, HP OpenVMS
EOL'd as of 8i: AIX 32-bit, DG/UX, Fujitsu, HP Alpha, any 32-bit HP, Hitachi, IBM Numa/Dynex
EOL'd at 7 or below: HP3000
---
Q: What is ASM? Automatic Storage Management?
A: Acronym: Automatic Storage Management
From the 10g concepts manual: Automatic Storage Management automates and simplifies the layout of datafiles, control files, and log files. Database files are automatically distributed across all available disks, and database storage is rebalanced whenever the storage configuration changes. It provides redundancy through the mirroring of database files, and it improves performance by automatically distributing database files across all available disks.
Basically, ASM takes away the need to assign tablespaces to filesystems and eliminates a common DBA headache of manipulating .dbf file locations.
See more in the Space Management/ASM section.
---
Q: What is ACID compliance?
A: A set of properties designed to guarantee that database transactions are processed reliably.
http://en.wikipedia.org/wiki/ACID
The acronym stands for Atomicity, Consistency, Isolation and Durability.
- Atomicity: transactions either "work" or they "fail." There is no jeopardy of an incomplete transaction leaving the database in an unknown state.
- Consistency: a database is always left in a consistent state post transaction. If a transaction were to cause an inconsistent state, then it is rolled back and an error produced. This includes integer fields not allowing fractions, and referential integrity (pks and fks).
- Isolation: data that is being modified (locked) by one application cannot be manipulated by another.
This can lead to transactions blocking one another, but no two simultaneous transactions can interfere with each other's in-flight changes. Modern databases often allow "dirty reads" (or use multi-version read consistency) to prevent selects from being blocked.
- Durability: the ability of the database to never lose a committed transaction upon a DBMS failure. Even if a transaction has not been written to the datafiles, if it has been "committed" it can still be recovered. Accomplished by the use of redo logs (or transaction logs in other databases).
To say that a database engine is ACID compliant means that it follows all four properties described here. In general, any major RDBMS is going to be ACID compliant. However, MySQL has storage engines that are (by design) NOT ACID compliant to allow for certain operations.
---
Q: What is the LTOM?
A: The Lite Onboard Monitor. See Metalink doc IDs 461052.1, 461050.1, 352363.1.
The Lite Onboard Monitor (LTOM) is a java program designed as a real-time diagnostic platform for deployment to a customer site. LTOM differs from other support tools, as it is proactive rather than reactive. LTOM provides real-time automatic problem detection and data collection. LTOM runs on the customer's UNIX server, is tightly integrated with the host operating system, and provides an integrated solution for detecting and collecting trace files for system performance issues. The ability to detect problems and collect data in real time will hopefully reduce the amount of time it takes to solve problems and reduce customer downtime.
https://metalink.oracle.com/cgi-bin/cr/getfile.cgi?p_attid=352363.1:ltom to download.
Environments: Unix only, no Windows. Tar file download. Solaris, Linux, HP-UX, AIX, Tru64.
--
Q: What are tables in the format "MDRT_" in my schema?
A: These are tables used when you create a spatial index structure on a table that has spatial or GIS data.
--
Q: What is Oracle Change Data Capture (CDC)?
A: A form of replication, typical in data warehousing environments, where just the rows of data that have changed are captured in the transactional environment. These rows are then saved into a "change table" in the downstream/DW environment. Equivalent to performing a minus operation on a table's worth of data.
Can be done synchronously as transactions occur, or asynchronously by analyzing transactions as they are written to the log files. In this respect, asynchronous CDC is very similar to Streams and is actually built upon Streams technology.
Cannot support BLOBs, LONGs, XML, IOTs, etc. Cannot capture any DML done with nologging/unrecoverable operations.
---
Q: Is Oracle certified on Windows VMware?
A: Doesn't seem so.
Metalink doc id 249212.1, "Support Position for Oracle Products Running on VMWare Virtualized Environments [ID 249212.1]"
http://wiki.oracle.com/page/vmware?t=anon
http://oraclestorageguy.typepad.com/oraclestorageguy/2009/04/what-the-oracle-vmware-support-statement-really-meansand-why.html
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
RAC Specific/RAC/
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Q: What is Oracle Real Application Clusters (RAC)?
A: RAC: Real Application Clusters: Oracle's database clustering, where multiple instances share one set of data files, allowing you to "plug in" additional computing power by adding nodes to the cluster. Introduced in 9i.
Maintained with srvctl:
srvctl status database -d uc4prd1
---
Q: Are RAC and Spatial extra costs? How about Data Guard?
A: RAC and Spatial cost extra, per CPU. Data Guard comes free with EE.
Apr 2014 clarification: RAC is an extra cost for EE but is INCLUDED in SE. Weird. DC gov't definitely was paying for RAC licenses when using it, so it isn't free.
---
Q: Where is the lookup table of compatibility between Oracle Clusterware (CRS), ASM versions and database versions?
I have multiple versions in my environment.
A: Oracle Clusterware - ASM - Database Version Compatibility [ID 337737.1]
---
Q: What is the difference between v$ views and gv$ views?
A: See http://neeraj-dba.blogspot.com/2011/04/what-is-difference-between-v-and-gv.html
From http://psoug.org/reference/dyn_perf_view.html: V$ views are views based on X$ arrays. The GV$ views are global V$ views that have, as their first column, the instance identifier (INST_ID). You should always use GV$ rather than V$ whenever possible.
When you are in RAC, you should always use the gv$ views instead of the regular v$ views to get a cross-RAC view of the data instead of a per-instance view.
---
Q: When I log into a RAC, how do I tell which node I'm in?
A: select host_name from gv$instance where instance_number=userenv('instance');
A little easier:
select * from v$instance;
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Streams Specific/Streams/
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
---
Q: What is Oracle Streams?
A: A product that reads the redo logs of a database and performs replication of the statements. It uses LogMiner.
Good quickie setup article: http://www.oracle-base.com/articles/9i/Streams9i.php
More informational links:
http://www.nyoug.org/Presentations/2006/06/Deshpande_Oracle10gStreams.pdf
http://www.oracle.com/technology/deploy/availability/pdf/AmadeusProfile.pdf
http://oraclesponge.wordpress.com/category/the-best-of-the-oracle-sponge/
---
Q: How much additional database load does the Streams supplemental log cause?
A: Per AskTom, it depends. Supplemental logging will definitely add additional redo, so it probably is only an issue on very high redo-generation environments where you're already "jammed" on redo. Most systems won't even notice the added load.
11/09: experience at DC shows that upstream capture takes 10-15% of PGA and CPU in a Windows 32-bit environment.
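The redo-overhead question above can be answered empirically: baseline how much redo the source already generates, then compare after enabling supplemental logging. A sketch using the standard v$ views (the comparison window and thresholds are up to you):

```sql
-- Cumulative redo bytes generated since instance startup.
SELECT name, value
  FROM v$sysstat
 WHERE name = 'redo size';

-- Log switches per day: a rough proxy for redo throughput trends.
SELECT TRUNC(first_time) AS day, COUNT(*) AS switches
  FROM v$log_history
 GROUP BY TRUNC(first_time)
 ORDER BY 1;
```

Run these before and a few days after enabling table-level supplemental logging; if the "redo size" delta is small relative to the baseline, the added load is likely negligible.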
---
Q: Does supplemental logging need to be enabled at the database level to do Streams?
A: No, it does not. Just on specific tables.
SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;
---
Q: What are some general Streams performance tips?
A: From Metalink Doc ID 335516.1, "Streams Performance Recommendations":
- Increase the shared pool and streams pool sizes.
- Some hidden parameters, as referenced in the Metalink document.
- Use simple rules to guide propagation and capture.
- Use a "heartbeat" table to ensure capture checkpointing is done regularly.
- Use the strmmon process to monitor.
- Use the v$streams_* views.
- Do NOT use database-wide supplemental logging.
---
Q: What datatypes are unsupported in Streams?
A: In 10g:
- BFILE datatype
- Simple and nested abstract datatypes (ADTs)
- Collections (nested tables and VARRAYs)
- Object refs
- XMLType datatype
- Index-organized tables (IOTs) with LOB columns
- Tables using table compression
---
Q: What are Streams configuration best practices?
A: See "10gR2 Streams Recommended Configuration", Doc ID 418755.1.
Recommended Patches for Streams: Doc ID 437838.1.
---
Q: What are some good Streams administration links?
A: Doc ID 273674.1, "Streams Configuration Report and Health Check Script": a series of scripts to quickly retrieve all the Streams config info you need.
RELATED DOCUMENTS
-----------------
Note 297273.1 9i Streams Recommended Configuration
Note 259609.1 Script to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk
Note 224255.1 Steps To Setup Replication Using Oracle Streams
Note 273674.1 Streams Configuration Report and Health Check Script
Note 290605.1 Oracle Streams STRMMON Monitoring Utility
Note 238455.1 Streams Supported and Unsupported Datatypes
Note 265201.1 Troubleshooting Streams Error ORA-1403 No Data Found
Note 437838.1 Recommended Patch for Streams
---
Q: What are the high-level steps to really implementing Streams?
A: Steps to implementing Streams:
1.
The source server must be in archive log mode. CSR is NOT currently in archive log mode, so we'd need to turn this on.
2. Create a Streams-specific tablespace and a Streams user account.
3. Source/target database parameter settings:
- global_names set to true in both source and target (CSR is false).
- compatible 9.2.0 or higher (CSR is 10.2.0.3.0, so we're OK).
- job_queue_processes 2 or higher (CSR is 10, so we're OK). 4 or higher in Metalink.
- streams_pool_size set to 200m or higher (CSR is set to 0; we'd have to modify).
- There are other possible parameters to manipulate; see the Metalink doc for the recommended configuration.
So we'd have to set a couple of parameters. Setting global_names may cause other issues if we have multiple databases with the same db name of CSR in the DC government environment, but this is low probability/lower risk.
4. Create a dblink on the source system.
5. Execute a series of setup commands on source and target.
6. Set up supplemental logging on EACH table that we plan to stream at the source. The implication here is that supplemental logging will add a bit of load to the source system when performing DML on these tables. Note also that Streams works best if there is a primary key or unique key on each source table to be streamed. We won't know this until identifying the tables to stream with Motorola.
7. Configure the Streams "capture" process at the source.
8. Configure the Streams "propagation" process at the source.
Steps 9-10: create the destination tables and grant privileges at the destination.
11. Run a quick command to grab the starting "system change number" or SCN at the source; this is basically the starting point after which all database changes are then streamed.
12. Configure the Streams "apply" process at the target db.
13. Start the capture process at the source, start the apply process at the target.
Important gotcha using Streams: certain datatypes are unsupported.
---
Q: What are the two main methods of configuring streams?
A: Upstream and downstream capture.
Upstream capture: the Streams capture process runs on the source database. The capture process calls LogMiner, which mines the redo logs and creates LCRs (logical change records) to send downstream to the apply process located on the target database.
Pros: near-immediate capture of changes.
Cons: does incur load on the source. The job queue/capture process will take 10-15% of system resources (memory and CPU): 150mb of PGA plus the CPU.
Downstream capture: the capture process runs on the target server, capturing transactions out of the redo logs as they're shipped over.
Pros: offloads load from the source database.
Cons: a bit slower than upstream capture to propagate (but not that much slower; 10-15 seconds versus almost instantaneous).
How To Configure Streams Real-Time Downstream Environment, Doc ID 753158.1
How To Setup Schema Level Streams Replication with a Downstream Capture Process with Implicit Log Assignment [ID 733691.1]
Downstream capture has two sub-methods of configuration:
1. Archive log capture: the "old" way of doing it; when an archive log gets generated, the committed transactions are sent down the pike to log_archive_dest.
2. Redo log capture: improvement in 10gR2; allows for much faster replication.
---
Q: What are some key DBA views to use to monitor Streams activity?
A: src:
select * from v$streams_capture;
select capture_name, state, state_changed_time from v$streams_capture;
tgt:
select * from dba_apply;
select queue_name, apply_captured, status from dba_apply;
select * from dba_apply_error;
---
Q: What are the meanings of the state and status fields in the Streams monitoring views?
A: From http://download.oracle.com/docs/cd/B19306_01/server.102/b14229/strms_capture.htm#i1014290
Values of the STATE field in v$streams_capture:
- INITIALIZING: Streams is starting up.
- CAPTURING CHANGES: Streams is analyzing the redo logs for transactions to send to the target database.
- CREATING LCR: Streams is converting the database change to an LCR (Logical Change Record).
- ENQUEUING MESSAGE: queueing up the LCR to send.
- PAUSED FOR FLOW CONTROL: Streams is enabled to propagate at the source, but the target apply process is either stopped or broken. Can also mean that the process has sent some data down to the target database and it is waiting for processing before sending more.
- WAITING FOR REDO: if all changes have been processed, the Streams capture process sits in this state, waiting for more database changes to analyze and send downstream.
Others less frequently seen:
- WAITING FOR DICTIONARY REDO
- DICTIONARY INITIALIZATION
- MINING
- LOADING
- EVALUATING RULE
- SHUTTING DOWN
- ABORTING
Values of the STATUS field in dba_apply:
- DISABLED: generally this means that the apply process has been stopped by hand.
- ENABLED: the Streams apply process has been started.
- ABORTED: the apply process has stopped in an error condition.
----
Q: I have deltas between source and target tables; what happened?
A: You probably had Streams apply errors.
1. select * from dba_apply_error;
2. For each error, you can print the error (using the print_transaction procedure in the Streams admin manual, chapter 22):
EXEC oraadmin.print_transaction('10.32.1360232');
3. You can try to re-run the transaction:
exec DBMS_APPLY_ADM.EXECUTE_ERROR('10.32.1360232');
4. If you rerun it and the error clears, Oracle removes the error from the dba_apply_error view.
5. If it errors again, you have to troubleshoot.
6. If you are sure the error is resolved, you can delete the error:
EXEC DBMS_APPLY_ADM.DELETE_ERROR('10.32.1360232');
---
Q: What are probable causes for Streams apply errors?
A: This doubles as a list of things to do prior to implementing Streams.
- Lack of a PK on a source table; with no PK, Streams can't figure out what records to update and might throw "ORA-01422: exact fetch returns more than requested number of rows." Solution: only stream objects with PKs or UKs.
- Not having matching PKs on the target to what is in the source. Even if you have a UK, you need to have a PK.
- Missing grants/lack of privileges: if Streams tries to insert a record and then grant a select to a non-existent role/user: "ORA-01435: user does not exist" for the statement. Solution: grant the Streams user DBA and unlimited tablespace, or fine-tune permissions.
- Performing non-logged operations on a source table, resulting in "ORA-01403: no data found" errors when Streams goes to update a record it thinks exists. Solution: don't do non-logged operations on source databases (such things as SQL*Loader loads with nologging, insert/append, etc).
- Lack of supplemental logging in the source; Streams won't be able to find transactions. Solution: use force logging, enable supplemental logging on all source tables.
- Updating the target tables directly without having 2-way replication; this can cause "no data found" errors, since Streams expects data to be as it left it and then can't match records. Solution: make the target read-only, or create Streams as two-way.
- Export/import of a table's data during active transactions will sometimes "miss" certain transactions. Immediately following exp/imp, perform a minus operation and manually sync up any record deltas.
---
Q: How do I tell what tables are being streamed?
A: select * from dba_apply_instantiated_objects;
---
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Installation/Configuration Theory
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
---
Q: What is the relationship between $ORACLE_HOME and $ORACLE_BASE?
A:
- $ORACLE_BASE is the default location used by the installer to install documentation, admin and other files.
- $ORACLE_HOME serves as the Oracle software home (lib, bin, etc).
$ORACLE_BASE is meant, in dbca, to be the home "directory" in which all data, control and rollback files get created.
---
Q: How do you add a new server to your client list?
A: Two ways:
- Hand edit $ORACLE_HOME/network/admin/tnsnames.ora, being extremely careful not to include tabs.
- Net8 Assistant, which is the front-end wizard to tnsnames.ora and which I've never actually seen work properly.
(This is akin to modifying the interfaces file (or sql.ini) for Sybase.)
---
Q: What is my database name? What database am I logged into?
A: If already logged in:
SQL> select * from global_name; -- this gets your hostname
SQL> show parameter instance; -- this gets your instance name
SQL> show parameter db_name; -- this gets db_name
---
Q: What is my hostname?
A: select host_name from v$instance;
---
Q: How do I troubleshoot client-server connections in Oracle? (typically ORA-12545)
A:
- Look at the $ORACLE_HOME/network/admin/sqlnet.log file.
- Ensure you can ping the IP or hostname in question from the client. This can be found in the "host" parameter of the ORA-12545 error in sqlnet.log.
- Run Net8 Assistant to create the connection info. You can also test connectivity from this tool, either at create time or from the connections list.
- $ORACLE_HOME/network/admin/tnsnames.ora is the "interfaces" file. Check it for errors. Make sure there are ONLY spaces, not tabs.
- tnsping can run from the command line and test connectivity.
- Confirm there are not multiple ORACLE_HOMEs on the machine (run regedit, hkey_local_machine/software/oracle and check).
- Confirm there's only one tnsnames.ora.
ora-12154: typically means you're trying to specify a SID that does not exist in tnsnames.ora
ora-12545: usually means the SID in tnsnames.ora does not match the SID on the server
ora-01017: wrong username/password
ora-12541: no listener; probably means Oracle is totally down, the listener is down, or the machine is off the network
---
Q: I'm still getting an ora-12154; what other things can I check?
A:
- Tabs in the tnsnames.ora: change to spaces.
- Configure tnsnames.ora through the Net config manager to ensure the syntax is correct.
- Ensure the $ORACLE_HOME variable is set properly in the PC's environment variables: right-click My Computer, Properties, Advanced, click "Environment Variables".
---
Q: Where is the server's errorlog? Where are system messages written to? (Where is the alert log?)
A:
- v$parameter "background_dump_dest" for the actual directory:
select name, value from v$parameter where name like '%background_dump%';
Oracle 10g and below: see the path in the init.ora file; usually it's $ORACLE_BASE/admin/<SID>/bdump/alert_<SID>.log (however, $ORACLE_BASE can be anything ... and isn't always $ORACLE_HOME. Always check the bdump location).
WinNT: $ORACLE_BASE/admin/<SID>/bdump/alert_<SID>.log ... and ORACLE_BASE typically gets set to c:\oracle.
11g: moved from its default location; now installs into $ORACLE_BASE/diag/rdbms/<dbname>/<instance>/trace
e.g.: /u01/app/oracle/diag/rdbms/eid1_palm/eid1/trace
select * from v$diag_info; for all these directory locations and more.
---
Q: How do you "move" the alert log location?
A: Change the background_dump_dest parameter and stop/start the instance.
---
Q: How can I write custom messages to my alert log?
A: Answered on Oracle-L by Jonathan Lewis, 10/25/04:
dbms_system.ksdwrt(1,'test') writes to the alert log
dbms_system.ksdwrt(2,'test') writes to the session's trace file
dbms_system.ksdwrt(3,'test') writes to both
See also:
dbms_system.ksdddt - writes a date-time stamp
dbms_system.ksdind(N) - indents text using ':' characters
dbms_system.ksdfls - flushes the write to file
Also, from Oracle-L, 11/1/04, by J.C. Reyes (jreyes@dazasoftware.com), executing as SYS:
SQL> begin
  dbms_system.ksdwrt(3,'________________');
  dbms_system.ksdwrt(1,'Writing Alert.Log file');
  dbms_system.ksdwrt(2,'Writing Trace File');
  dbms_system.ksdwrt(3,'Writing Both Alert and trace file');
  dbms_system.ksdddt; -- writes a date-time stamp
  dbms_system.ksdind(11); -- indents text using ':' characters
  dbms_system.ksdfls; -- flushes the write to file
  dbms_system.ksdwrt(1,'Writing Alert.Log file');
  dbms_system.ksdwrt(2,'Writing Trace File');
  dbms_system.ksdwrt(3,'Writing Both Alert and trace file');
end;
/
---
Q: What is the difference between an instance and a database?
A:
- The "database" is your actual data.
- The "instance" is the memory-based server components that allow access to your data.
---
Q: What is OFA?
A: Optimal Flexible Architecture; basically Oracle's configuration guidelines for installing servers. If defaults are taken for values, it's called "taking OFA recommendations."
Originally written in 1995 by Cary Millsap (cary.millsap@hotsos.com), it described a logical architecture and configuration example for Oracle installations.
---
Q: What are some good OFA rules of thumb for overall system configuration?
A: (started w/ Howard Rogers' post to c.d.o.s 1/24/01)
NOTE: some of these are definitely dated recommendations...
- SGA size no more than 1/3 of total physical RAM.
- Shared pool 2-3 times the size of the buffer cache (about 100mb).
- Buffer cache starting at 4000 * block size (usually about 32mb).
- Small log buffer.
- 8k block size on most unix boxes, 16k for NT (make it match the file system block size, typically 8k on unix). However, if you're using raw data files or a direct-i/o OS (like NT/XP) you can go larger.
- Use LMTs w/ a uniform extent size. Set up your tablespaces with initial and next extents equal, and set to varying sizes depending on your expected table growth (small tables 160K, medium 5M, large 160M). These numbers come from the SAFE document.
----
Q: How do you tell what version of Oracle we're running? How do you tell what patch level of Oracle we're running at?
A:
- Log in via sqlplus; the server prints out version info.
- The alert log file prints the version at boot time.
The first three numbers in the version (e.g. the 8.1.7 of 8.1.7.0.0) are the version; the last two are the patch level.
BEST: these two seem to give more or less the same output (the first one is better):
sql> select * from v$version;
sql> select * from product_component_version;
---
Q: What is opatch?
A: Oracle's patch application utility for applying patches to the binaries that run the Oracle product.
Installed by default in $ORACLE_HOME/OPatch.
---
Q: How do you install opatch?
A: Master Note For OPatch (Doc ID 293369.1)
See Note 274526.1, How To Download And Install The Latest OPatch Version.
---
Q: How do you tell what bundle patch you're running? Aka 11.2.0.4.X?
A: More complicated and not entirely straightforward.
opatch lsinventory | grep "Patch description"
Other helpful commands:
./opatch lsinv -bugs_fixed | grep ^P
select * from sys.registry$history order by action_time desc;
select * from dba_registry_history order by action_time desc;
opatch lsinv -bugs_fixed | egrep -i 'bp|exadata|bundle'
---
Q: How do you tell what patches have been run on a database server?
A:
SQL> select * from dba_registry; -- lists all components and what version they're at
$ opatch lsinventory -- lists all CPUs, one-offs and the like
---
Q: How can you tell what patches have run if opatch is broken?
A: You probably cannot; opatch is what reads the binaries, checks to see what's been run, and reports at a far greater detail level than what's in the database.
--
Q: How do you run "Check Conflicts" (CheckConflict) to see if a patch conflicts?
A:
- Download the patch.
- Unzip the patch file on the server in question.
- Run ./opatch prereq CheckConflictAgainstOH and pass in the path to the patch zipfile.
Example: $ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
---
Q: How do you know what the patchset number is for a particular release?
A: Check Metalink: you download patches by patch ID.
http://otn.oracle.com/support/patches.htm website no longer works.
Quick Reference to Patchset Patch Numbers [Doc ID 753736.1]: this Metalink note has a list of the patch IDs for all the major releases.
Updated 2/17/16 from the doc:
12.1.0.2    21419221
12.1.0.1    Base patch release: downloads.oracle.com
11.2.0.4    13390677
11.2.0.3    10404530
11.2.0.2    10098816
11.2.0.1    Base patch release: downloads.oracle.com
11.2.0.1.2  9654983
11.2.0.1.1  9352237
11.1.0.7.4  9654987
11.1.0.7.3  9352179
11.1.0.7.2  9209238
11.1.0.7.1  8833297
11.1.0.7    6890831
10.2.0.5    8202632
10.2.0.4.5  9654991 [overlay PSU]
10.2.0.4.4  9352164
10.2.0.4.3  9119284
10.2.0.4.2  8833280
10.2.0.4.1  8576156
10.2.0.4    6810189
10.2.0.3    5337014
10.2.0.2    4547817
10.2.0.1    Base patch release: downloads.oracle.com
9.2.0.8     4547809
9.2.0.7     4163445
9.2.0.6     3948480
9.2.0.5     3501955
9.2.0.4     3095277
9.2.0.3     2761332
---
Q: What is the process for getting patches from Oracle?
A: You must have a Support Identifier, aka Service Access Code, to access Oracle's Metalink site. You can't get a SAC unless you have a service agreement. Otherwise, you simply download the patches you need from Metalink.
metalink.oracle.com: click Patches and then select your database OS version.
You may also download patches using any FTP client by connecting to updates.oracle.com. Connect with your Metalink username and password and read the welcome banner for more instructions.
---
Q: How do you install a patch?
A: It depends. Major releases typically come with a runInstaller java program. One-offs or more obscure patches install with opatch. Usually runs in 30 mins - 2 hrs. Follow the readme instructions to the letter, because oftentimes various things need to be done to get a patch installed. Major releases usually take about 2 hours.
There are a couple of standard post-patch installation steps typically done:
SQL> @?/rdbms/admin/catpatch.sql (takes an hour)
SQL> @?/rdbms/admin/utlrp.sql (takes 5 minutes)
Update: 10g removes the catpatch and utlrp processes and provides a gui front end called "dbua" that executes them for you.
---
Q: What is the Windows equivalent of opatch?
A: opatch.bat, usually installed in %ORACLE_HOME%\OPatch\opatch.bat:
set ORACLE_HOME=c:\oracle\product\10.2.0\db_1
cd %ORACLE_HOME%\OPatch
opatch lsinventory
---
Q: What is a quick and dirty way to grep through the alert log for errors?
A: grep "ORA-" $ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log | egrep -v "7324|165[3-4]" | more
Note: this does not give you any date-time stamps. Suggest editing the file and searching for specific error codes as seen in here.
---
Q: How can you tell on an Oracle box when the server is up and running? (aka, the uptime, or how long the server has been up)
A: The alert log file gets these two lines at start and end of boot: "Starting ORACLE instance (normal)" ... "Completed: alter database open" when completed.
Another way:
sql> select min(first_load_time) from v$sqlarea;
Others:
sql> SELECT startup_time FROM v$instance;
---
Q: How do you startup/shutdown the server?
A:
- Unix: in the $ORACLE_HOME/bin directory, look for dbshut, dbstart.
These scripts depend on /var/opt/oracle/oratab being configured correctly (as in, having a "Y" in the third field).
- Win 2k/NT/XP and 9i: Oracle Administrative Assistant.
- Win 2k/NT/XP: svrmgrl, connect internal, startup (8i only; this is obsoleted in 9i).
$ sqlplus /nolog
sql> connect sys as sysdba (password)
sql> startup (or shutdown as the case may be)
dbstart and dbshut should work too, if the Oracle installation worked properly and your /var/opt/oracle/oratab is properly configured.
---
Q: Where is svrmgrl in Oracle 9i?
A: Desupported; all the functionality was put into sqlplus.
% sqlplus /nolog
SQL> connect sys/password as sysdba
or
% sqlplus "/ as sysdba"
---
Q: What does the /nolog flag do for sqlplus?
A: Basically, start sqlplus but do not log in (nolog) to a particular schema. This essentially emulates the svrmgrl program from 8i and before. /nolog establishes no initial connection to Oracle Database.
---
Q: What does the / flag do for sqlplus?
A: / is a default logon using operating system authentication.
----
Q: What are the default passwords set by the Oracle installer s/w? What are the default accounts automatically created?
A: 8i and before:
- internal: oracle
- sys: change_on_install
- system: manager
- scott/tiger
- dbsnmp/dbsnmp
Oracle Developer 2000: the example database is scott/tiger.
9i: the installer requests a password for sys and system on install. However, if you crash the installer or quit out of it before it finishes, sys and system default to their 8i defaults.
- scott/tiger
- dbsnmp/dbsnmp
Oracle Enterprise Manager v2.1: sysman/oem_temp
10g: dbca gives the option to assign the same pwd to sys, system, dbsnmp, sysman at installation. So odds are that the sysman and dbsnmp passwords are set to your sys/system pwds. You have to manually enter them at db creation time.
Exadata: most of the defaults are welcome1.
---
Q: Why are there two "system" accounts? Why have SYS and SYSTEM?
A:
- SYS: owns the data dictionary tables; is a privileged user and thus can do startups/shutdowns, backups and recoveries, etc. In 9i, can only connect as sysdba.
- SYSTEM: an all-powerful user that can see everyone's objects within a server.
Simply: SYS "owns" the database. SYSTEM "manages" the database.
---
Q: What is the purpose of the users/schemas installed by default? (That are left "open" by default in 9i.)
A:
- SYS, SYSTEM: system accounts used to perform DBA activities (see prev. question)
- SCOTT: connectivity test, small relational model (bonus, dept, emp, salgrade)
- OUTLN: owns stored outlines for SQL stability purposes
- DBSNMP: required for the Oracle Intelligent Agent
Many other accounts are installed via the "example" schemas, if you install them: (HR, OE, PM, SH, QS, QS_CB, QS_CBADM, QS_CS, QS_ES, QS_OS, QS_WS)
Other accounts normally installed by default but left locked:
- ctxsys: Oracle Text/Intermedia/ConText
- cwmlite: Oracle OLAP
- mdsys: Oracle Spatial
- odm/odm_mtr: Oracle Data Mining
- olapsys: OLAP Services
- ordplugins/ordsys: Oracle InterMedia
- wkproxy/wksys: Oracle Ultrasearch
- wmsys: Workspace Manager
- xdb: XML database
---
Q: What is a good example SQL to see all non-default users on a database?
A: select * from dba_users where username not in
('ANONYMOUS','CTXSYS','DBSNMP','DIP','DMSYS','EXFSYS','MDDATA','MDSYS','MGMT_VIEW',
'OLAPSYS','ORACLE_OCM','ORDPLUGINS','ORDSYS','OUTLN','SDE','SI_INFORMTN_SCHEMA','SYS','SYSMAN','SYSMON',
'SYSTEM','TSMSYS','WMSYS','XDB','QUEST_SPOT','PERFSTAT','SCOTT','HR','OE','PM','SH',
'QS','QS_CB','QS_CBADM','QS_CS','QS_ES','QS_OS','QS_WS')
order by username;
---
Q: What is the purpose of all the default tablespaces that get installed?
A: By TS:
- cwmlite: used by user olapsys, stores objects for OLAP Services
- drsys: used by user ctxsys, home tablespace for Intermedia/Context Server
- example: holds all the demo schemas (see above question for a list)
- indx: initially empty, designed to hold indexes for normal users
- odm: holds all objects for users ODM, ODM_MTR (Oracle Data Mining)
- tools: initially empty; used by RMAN for one
- users: initially empty, designed to be the home tablespace for normal users
- xdb: holds all xdb schema objects, which is the XML database
---
Q: How do I connect to Oracle if I don't know any of the login passwords?
A:
- Option a: change an existing account by connecting as sysdba (see next).
- Option b: dig through the files looking for it. Most Oracle DBAs embed passwords in admin scripts. Do a crontab -l as unix user oracle, look for a backup script.
- Option c: as user oracle:
$ sqlplus /nolog
SQL> connect / as sysdba
Connected.
---
Q: How do I change a password if I've lost it?
A: Several answers.
- Windows NT/XP/W2k: make sure you're logged in as a user in the "ORA_DBA" group (usually, everyone installs via Administrator anyway). Run:
$ORACLE_HOME/bin/orapwd file=c:\oracle\ora81\database\PWD<SID>.ora password=<newpassword>
then in sqlplus: connect sys/<newpassword>@sid as sysdba
- Unix: make sure you're a user in the "dba" group in /etc/group, then:
% sqlplus /nolog
sql> connect / as sysdba
sql> alter user <user> identified by <newpassword>;
You can change sys, system, or any normal user's password this way:
sql> alter user system identified by your_new_password;
On 8i and below you can also connect internal from sqlplus or svrmgrl. Once connected internal, you can change whatever.
---
Q: I just tried to change my user password and got an ORA-28007: the password cannot be reused. How do I fix this so I can use the same password?
A: Change the password-related limits in the profiles.
select profile, resource_name, limit from dba_profiles
where resource_name in ('PASSWORD_REUSE_MAX','PASSWORD_REUSE_TIME');
---
Q: What steps does the Oracle Installer go through when installing? Specifically, what pieces of information should the installer have beforehand before installing the product?
A: See separate Installation notes file.
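The ORA-28007 fix above can be finished off with an ALTER PROFILE once the dba_profiles query identifies which profile the user is on. A sketch, assuming the user is on the DEFAULT profile (substitute the real profile name):

```sql
-- Allow passwords to be reused immediately; loosen only as far as
-- your security policy permits.
ALTER PROFILE default LIMIT
  password_reuse_max  UNLIMITED
  password_reuse_time UNLIMITED;
```

After this, the original "alter user ... identified by ..." will accept the old password again.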
---
Q: What are the default ports/port numbers used by Oracle?
A:
- 4443, 7778: Apache HTTP module
- 2030: MTS port
- 1521: Oracle listener
- 1158: 10g OEM webserver
---
Q: Are there any port number limitations in Oracle? Does my port number have to be in the 1500-1599 range?
A: There does not seem to be, but there are some "best practices" and "well known" port numbers (taken from http://www.databasejournal.com/features/oracle/article.php/3332361/Connecting-with-Oracle-Oracle-Ports.htm).
http://www.iana.org/assignments/port-numbers reports that ports 0-1023 are "protected" by the base tcp/ip protocol. Oracle officially "owns" 2483 and 2484 (one for normal tcp, one for ssl-based traffic). The prediction is that the 1521 port (which is officially "owned" by another company, nCube) may be changed in the future.
---
Q: How do you check the validity of a database data file? (What is the Oracle equivalent of Sybase's dbcc?)
A: dbv
---
Q: How is the SGA comprised?
A: From Oracle-L conversation 9/20/03, multiple posts. Updated 7/19/07.
SGA = x + y + z + ?
where x = default buffer pool (dbblksize*db_blk_buf, OR db_cache_size if 9i)
y = shared_pool
z = java pool, log_buffer
? = smaller fixed areas, db_buffer_cache, optional buffer pools (such as large_pool, keep pool, recycle pool).
e.g. 268m + 117m + 117m + ?
- Oracle allocates memory automatically in different-sized chunks, depending on the overall size of the SGA:
* 128MB or less: 4MB chunks
* over 128MB: 16MB on Unix, 8MB on Windows
- select * from v$sgastat or v$pgastat for stats.
- OFA recommends setting SGA size to be no more than 1/3 of total physical RAM. However, more modern systems can handle far more; this is a very old recommendation for 7.x or 8.x versions.
SQL> show sga
prints output like this:
Total System Global Area 3843528736 bytes
Fixed Size                   735264 bytes
Variable Size            1157627904 bytes
Database Buffers         2684354560 bytes
Redo Buffers                 811008 bytes
Fixed: ?
Variable: shared_pool size
Database Buffers: db_cache_size + any db_?k_cache_size + keep or recycle cache sizes
Redo Buffers: the log_buffer parameter (default 8mb, should always increase)
---
Q: How is the PGA comprised?
A: Using Metalink Doc id 223730.1 "Automatic PGA Memory Management"
http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/memory.htm#i8451
Official definition from the Oracle manuals: A PGA is a memory region that contains data and control information for a server process. It is nonshared memory created by Oracle Database when a server process is started. Access to the PGA is exclusive to the server process. There is one PGA for each server process. Background processes also allocate their own PGAs. The total memory used by all individual PGAs is known as the total instance PGA memory, and the collection of individual PGAs is referred to as the total instance PGA, or just instance PGA. You use database initialization parameters to set the size of the instance PGA, not individual PGAs.
Contents of the PGA:
- Session Memory: holds session variables and login information per process.
- Private SQL Area:
a. Cursors and SQL Areas: cursor information, controlled by open_cursors.
b. Private SQL Area Components:
i. persistent areas, which hold bind variables
ii. run-time areas, which contain query execution state information and some SQL work areas
c. SQL Work Areas: work areas to support memory-intensive operators, such as sorts, hash joins, bitmap merges, bitmap creates.
---
Q: What happens if I run out of PGA?
A: ORA-04030: "out of process memory when trying to allocate string bytes (string,string)."
---
Q: Is there ever a case where (paradoxically) I can LOWER PGA to avoid out-of-PGA errors?
A: ??? Perhaps in cases where the PGA limits actually extend beyond the shared memory limits on the server? Happens in 32-bit all the time.
---
Q: How much PGA am I consuming right now? How much SGA am I consuming right now?
A: select * from v$pgastat order by value desc;
select max(pga_used_mem), max(pga_alloc_mem), max(pga_max_mem) from v$process;
select * from v$sgastat order by bytes desc;
---
Q: What is the difference between PGA and SGA?
A: Simplistic answer: the SGA holds things that are shared between users, while the PGA holds things that are private to a user.
The SGA holds database buffers.
The PGA holds compiled copies of code, cursors, and temporary per-user memory for processing.
---
Q: How do I tell if I'm running 32-bit or 64-bit Oracle?
A: Several answers; stolen from www.oracleadvice.com
- Check your OS; you must be running a compatible 64-bit OS:
isainfo -kv on Solaris, getconf KERNEL_BITS on HP/UX.
- Check your db:
% file $ORACLE_HOME/bin/oracle (it will say whether it's 32- or 64-bit)
SQL> select address from v$sql where rownum < 2;
If it's 32-bit, the address will be stored in an 8-character hex string. If it's 64-bit, the address will be in 16 hex characters.
sql> select paddr from v$session where rownum < 2;
sql> SELECT dbms_utility.port_string FROM dual;
/* will return a string that will say 64bit if it's 64-bit */
Do NOT depend on select * from v$version looking for "... with 64-bit option." This is NOT reliable and in some cases is wrong.
---
Q: What are the processes that Oracle starts, and what do they all do?
A: Unlike Sybase, where all the processing is done within one process, Oracle starts individual unix processes for each task. Unless otherwise specified, processes are named in the format "ora_XXXX_SID" (here, "N" means a number, "n" means part of the process name). In alphabetical order:
- arch/arcN: Archiver process (if started); archives redo logs to a backup medium. Works after a log switch (when one redo log fills up). Max # controlled by db parameter.
- bspN: Buffer Server processes: 10 max
- cjqN/snpN: Job queue processes: run batch jobs. 36 max. Spawns jNNN processes
- ckpt: Checkpoint process: updates the headers of all datafiles.
- cNNN: c001: Streams Capture processes
- dNNN: Dispatcher processes: allow user processes to share resources
- dbwr/dbwN: Database Writer: writes modified blocks in memory to disk. 10 max
- emnN: Event monitor; used w/ Oracle Advanced Queuing
- FMON: database communications w/ external storage vendors
- iNNN: I/O slaves: emulate asynch I/O
- jNNN: j000: Job Queue processes
- lckN: Lock processes. 10 max
- lgwr: Log Writer: periodically writes changes to data sitting in memory out to the redo disk files. Writes to disk on commit, every 3 seconds, when the redo log buffer is 1/3 full, and as necessary.
- lmdN: Lock Manager Daemons: used in parallel environments
- lmon: Lock Monitor
- lms: Lock Manager Server: RAC specific
- mman: manages Automatic Shared Memory; serves as SGA memory broker, coordinates 10g auto-sizing of various memory components
- mmnl: lightweight manageability collection tasks (session capture history, metrics)
- mmon: 10g background process to collect stats for AWR
- mNNN: MMON background slave
- pNNN: Parallel execution slaves: run any parallel operations. 256 max
- pmon: Process Monitor: cleans up behind processes, reclaiming resources
- psp0: Process Spawner 0
- qNNN: q000: Advanced Queuing slaves
- qmnc: AQ Advanced Queuing coordinator; spawns qNNN processes
- qmnN: Queue monitor; used w/ Oracle Advanced Queuing. 10 max
- rbal: rebalance activity for Automatic Storage Management environments
- reco: Recoverer process. Used in distributed database configurations; resolves failed transactions between nodes.
- sNNN: Shared servers: same as dedicated servers, but sharable
- shad: Oracle Shadow process; every connection in Windows creates a SHAD process. Connect v$process to v$session to get the session program name.
- smon: System Monitor; cleans up behind users, reclaiming resources. Also does crash recovery if necessary at startup.
- trwr: writes trace files.
- wmon: Wakeup Monitor: obsoleted in 8i
Others:
- $ORACLE_HOME/bin/tnslsnr: the active listener, looking for traffic on the oracle port designated in listener.ora for the server
- $ORACLE_HOME/jdk/bin/java: starts the Java server that serves as the 10g grid control web console
- $ORACLE_HOME/bin/emagent: allows communication to the 10g grid control dbconsole
- 10g also starts a persistent perl connection w/ the enterprise mgr dbconsole
---
Q: How does asynch I/O work in Oracle? Are there any advantages to using raw disks over file-system based files, as with Sybase?
A:
- Pre-9i, the use of raw disks to enable async I/O was recommended. 9i introduces Oracle Cluster File System (OCFS), which is claimed to be a raw replacement and to be just as fast as raw I/O access. In practice, several DBAs have noted that OCFS 1.0 is not nearly as fast as raw, but that 2.0 (expected 3Q03) is supposed to be better.
The same issues exist working w/ raw disks versus filesystems on Oracle:
- async I/O
- writes without operating system buffers
- full use of the space of the raw device
- raw disks let you use a larger block size, since you're not bound to the 8K file system block size
- Oracle apparently uses something called "sync write" to ensure that writes are done through the file system buffers before returning to the database, eliminating the common Sybase concern w/ buffered FS writes failing before they get to disk.
- Use of Concurrent I/O on AIX reportedly gives near-raw performance. See white paper: http://www-03.ibm.com/servers/aix/whitepapers/db_perf_aix.pdf
- Probably depends on the use of Direct I/O. Direct I/O is a combined Oracle and file-system configuration that makes cooked filesystems act as if they were raw. Requires unmount/remount of filesystems w/ oracle data, and setting a parameter in the oracle db: filesystemio_options=forcedirectio.
Highly suggested by Oracle experts to avoid double-buffering of I/O to file systems.
Other advantages:
- Security: the oracle user owns .dbf files, and can easily drop them. You'd have to be root and have knowledge of the /dev directory structure to do something similar with a raw disk.
- You can move a raw tablespace data file from one partition to another like this:
ALTER TABLESPACE AAA OFFLINE;
dd if=/dev/rlv01 of=/dev/rlv02 bs=4k skip=1 seek=1
ALTER TABLESPACE AAA RENAME DATAFILE '/dev/rlv01' TO '/dev/rlv02';
ALTER TABLESPACE AAA ONLINE;
- backup performance using dd: 10 minutes to back up a 2gb file, versus twice the time to do a tar or cpio.
---
Q: Why are file-system based tablespaces so prevalent in Oracle, despite the fact that raw disks are known to be faster?
A: Probably because Oracle for a long time had no native backup tools, and thus backups were done directly on the tablespace datafiles. With a raw environment, DBAs would have to use "dd", whereas in a file system the .dbf file is a regular file and can be rolled into regular fs-based "dump" routines. In more modern systems, backup tools exist that can back up "raw" partitions. However, the "comfort factor" of dealing w/ "files you can see" by doing an ls usually outweighs the available performance improvements. Also, administrators can just move the ".dbf" files from one filesystem to another for quick load balancing. It's possible to move raw devices, but it requires in-depth knowledge of the dd tool.
---
Q: What are the default Oracle-recommended /etc/system shared memory parameters? What are the ramifications of leaving these at default?
A: (see doc id 15566.1 at Metalink)
These are mins and defaults (plus more sane recommendations) for common /etc/system parameters.
Defaults are taken from the Oracle install documents, as are minimums.
Format: minimum, default (recommended)
(shm == shared memory settings)
set shmsys:shminfo_shmmax=?, 4294967295 (4gb as max for 32-bit systems, larger for 64-bit)
set shmsys:shminfo_shmmin=1, 100 (1)
set shmsys:shminfo_shmmni=100, 100 (100)
set shmsys:shminfo_shmseg=10, 100 (10-20)
(sem == semaphore settings)
set semsys:seminfo_semmsl=256, 256 (256 in 9i+)
set semsys:seminfo_semmns=256, 1000 (700 in 8.0, 1024 8i+)
set semsys:seminfo_semmni=100, 400 (70 in 8.0, 100 8i+, ?? 9i)
Others seen set, but which don't necessarily have Oracle-recommended values:
set semsys:seminfo_shmmap=?, 100 (?)
set semsys:seminfo_semmnu=200 (??)
set semsys:seminfo_semume=50 (??)
set semsys:seminfo_semopm=? (100)
set semsys:seminfo_semvmx=? (32767)
Discussion/Definitions (definitions from a Sun kernel page):
- shmmax: maximum allowable size of one shared memory segment. If set to 4gb (and 4gb is more than your physical RAM), then you open yourself up to heavy paging if shared memory were ever to get that high. Oracle recommends setting it to 4gb in all cases. I prefer to leave it at 70-75% of your physical ram to reserve some "private" memory for the OS.
- shmmin: minimum allowable size of a single shared memory segment (in bytes). Really PAGE_SIZE.
- shmmni: maximum number of shared memory segments in the entire system. (1<<_SHM_ID_BITS).
- shmseg: maximum number of shared memory segments one process can attach. Set to the same value as shmmni.
- semmsl: 10 + the largest "processes" parameter in any of the initSID.ora files for any of the databases on the system. 100 is the minimum recommended value.
- semmns: maximum semaphores on the system. 1024 is the minimum recommended value.
Sum of the "processes" parameters from all databases running (except the largest one), plus TWO times the largest processes value, plus 10 * the number of databases (in a simple model, basically twice the # of processes plus 10).
- semmni: maximum number of semaphore sets in the entire system.
(These may not really be necessary:)
- shmmap:
- semmnu:
- semume:
- semopm: maximum number of operations per semop call.
- semvmx: maximum value of a semaphore.
---
Q: Is there a 4gb limit of addressable memory in Oracle? What is the maximum amount of shared memory allowed?
A: There used to be (apparently). And this is driven by the platform. For example, Windows NT did NOT address anything larger than 4gb, regardless of what database you're on, and more specifically didn't allow any one program to have more than 3gb at a time.
- Oracle 8i (and some versions below): 4gb
- Oracle 9i: max memory is now maxint (2^64-1) as long as you're on a 64-bit machine; 2^32-1 otherwise.
This is a discussion about the 4gb memory limit in Windows:
http://www.brianmadden.com/blogs/brianmadden/archive/2004/02/19/the-4gb-windows-memory-limit-what-does-it-really-mean.aspx
---
Q: How do I tell how much shared memory is on my Linux server? How do I modify shmmax for Linux? What is the /etc/system equivalent?
A:
- cat /proc/sys/kernel/shmmax to see what shared memory is currently set to.
- edit /etc/sysctl.conf and change the line for kernel.shmmax = 6442450944
Linux also allows you to change this dynamically:
sysctl -w kernel.shmmax=<value> to change the running value immediately
sysctl -p /etc/sysctl.conf to read/reload the values from sysctl.conf
---
Q: How much swap space should I allocate on my database server?
A:
- Sun recommends 2-4 TIMES your physical ram be allocated for swap space.
- In OLD Solaris days, at least 1.5 times RAM was needed. With more modern OSs, you can get by with less; 2-4 times seems excessive.
Burleson's recommendation/comments: always do at least == RAM, and more if you can.
http://www.dba-oracle.com/t_server_swap_space_allocation.htm
* If you have between 1 and 2G RAM, you need to configure 1.5x RAM for swap space.
* For 2 to 8G RAM, swap space has to equal RAM.
* For RAM more than 8G, swap needs to be ¾ RAM.
But Oracle's 11g R2 Linux installation guide says this:
http://download.oracle.com/docs/cd/E11882_01/install.112/e16763/pre_install.htm#CHDCEBFF
o Between 1 GB and 2 GB of RAM: set swap to 1.5 times the size of RAM
o Between 2 GB and 16 GB of RAM: set swap equal to the size of RAM
o More than 16 GB of RAM: set swap to 16 GB
---
Q: What are some considerations to keep in mind while laying out an Oracle physical installation? What components work best on what kinds of disk configurations? What I/O considerations should I keep in mind?
A:
- Redo logs: work well on raid 1/0 with small stripe sizes (?? arguable: I've seen two differing opinions). You don't even need striping if you've got them mirrored (raid-1) w/ lots of write cache. Filesystem based, since they're written sequentially. Should be separated from data files. If you multiplex your redo logs (as you should), make sure the group members are on different I/O devices or you'll totally choke your performance.
- Rollback tablespace/undo tablespace: raid 1/0. Random access, good on async I/O raw disks. Probably can share redo log space, probably can share temp TS space in very serialized processes. Ideally has its own I/O device.
- Datafiles for write-intensive/OLTP user tables: raid 1/0, raw disks for asynch
- Datafiles for read-intensive/DSS user tables: raid 5, raw disks over fs
- Tablespaces for indexes: smaller extent sizes, separated from their tables (not necessarily for performance, but for administrative purposes). If the index tablespace gets corrupted, simply offline, drop and recreate the index tablespace. Same considerations as underlying tables (OLTP versus DSS) in re raid 5 or raid 0/1 (see later discussion regarding tables and their indexes). Raw disks over fs.
- Archive logs: high write, raid 0/1, should be file system based.
- Temp tablespace: make sure it's on a separate disk from your other regular tablespaces. Temp access is sequential, works well on filesystems.
- MVs: if they're read-only, raid 5. If they're refresh-immediate on OLTP tables, raid 0/1.
- /oracle binaries: internal disks, 0/1 for safety
- utility scripts: internal disks, 0/1 for safety
- backup files: raid 0/1 for speed of backup performance
Hints:
- Use many extents; there is no gain from using one large extent. Uniform extent sizes are strongly recommended pre-9i; in 9i and greater you can "autoallocate" extents of different sizes as needed.
- Tables are efficient w/ small block sizes. Calculate your avg rowsize and make sure the block size makes sense (enough to fit at least one row per block).
- Indexes are efficient w/ larger block sizes.
- Put the most frequently accessed tables/information on the "outside" tracks of your discs; that's the fastest part of the disk.
Summary:
Raw: data, index, rollback/undo (arguable)
Filesystem: redo, archive, temp
raid 0/1: redo, rollback, oltp tables, oltp indexes, archive logs, $ORACLE_HOME, temp
raid 5: dss tables, dss indexes, non-refreshable MVs
June 27, 2000 Ask Tom answer to the same question:
Here is what I like (raid 0 = stripes, raid 1 = mirrors, raid 5 = striping+parity):
o no raid, raid 0 or raid 0+1 for online redo logs AND control files. You should still let us multiplex them ourselves even if you mirror them. We have more opportunities for failure if the raid subsystem reports a "warning" back to us -- if we have multiplexed them -- we are OK with that.
o no raid or raid 0 for temporary datafiles (used with temporary tablespaces). no raid/raid 0 is sufficient. If you lose these, who cares? You want speed on these, not reliability. If a disk fails, drop and recreate temp elsewhere.
o no raid, raid 0 or raid 0+1 for archive.
Again, let us multiplex if you use no raid or raid 0; let the OS do it (different from online redo log here) if you use 0+1.
o raid 0+1 for rollback. It gets written to lots. It is important to have it protected. We cannot multiplex it, so let the OS do it. Use this for datafiles you believe will be HEAVILY written. Bear in mind, we buffer writes to datafiles; they happen in the background, so the poor write performance of raid 5 is usually OK except for the heavily written files (such as rollback).
o raid 5 (unless you can do raid 0+1 for all, of course) for datafiles that experience what you determine to be "medium" or "moderate" write activity. Since this happens in the background typically (not with direct path loads and such) -- raid 5 can typically be safely used with these. As these files represent the BULK of your database and the above represent the smaller part -- you achieve most of the cost savings without impacting performance too much.
Try to dedicate specific devices to:
o online redo
o archive
o temp
They should not have to share their devices with others in a "perfect" world (even with each other).
---
Q: What I/O contention issues exist with the main Oracle components?
A: The major components and their I/O contention issues are as follows:
o system and control files: no real I/O contention issues, sparsely written to.
o tables and indexes: try to separate tables and their indexes. Especially separate heap tables and the index. At least different tablespaces, if not different I/O devices. Reasons: less for performance, more for differing needs, plus the ability to drop/recreate the index tablespace due to corruption. In fact, arguments can be made that from a performance perspective, tables and their indexes are accessed in a completely serialized manner, and thus they can share the same I/O device.
o tables and MVs based on those tables: should be separated from data/indexes if you have refresh immediate turned on; refresh on demand can share space with data files.
o archive logs: should NOT be on the same disks as redo logs, since they only get spawned when a redo log fills up, which implies that active writing to the redo log is occurring at that time. Guaranteed to cause contention.
o redo logs: see archive logs. Also, contention with temp.
o temp tablespace: causes I/O contention with rollback/undo, since both are actively written to during large transactions. Also has issues sharing space with redo, since both are direct writes and need speed of access.
o undo tablespace: see temp above. A lesser priority to keep undo away from other objects than with redo, archive and temp.
Priorities of objects to keep apart in environments with few I/O options:
1. Online redo from temp
2. Archive logs from redo
3. Temp from undo
4. Online redo from undo
5. Tables and their derivative MVs
6. Tables and indexes on separate disks
7. System/control files on separate disks
---
Q: What are the issues involved with separating tables and their indexes? Should tables and indexes be on separate tablespaces? Is it really necessary? Is this an Oracle myth?
A: Discussion resulting from a mass Oracle-L discussion 4/23/04, plus several web resources. More discussion 12/14/04.
Yes, it's a myth, because:
- If you query a table that has an index, the engine reads the INDEX first to get the row ids, THEN reads the table in a serial fashion. So the two objects are accessed serially and thus can be in the same location.
- The most frequent index access methods are unique and range scans, which are random accesses of b*tree leaves, which are by nature also serialized single-block reads.
However, it is noted that keeping the objects in separate tablespaces for administration and recovery is always a good idea. But there is no separate need to move them onto different I/O devices for performance.
?? Does this make sense?
If I'm doing high OLTP and am updating data and indexes constantly, wouldn't that create an I/O bottleneck?
---
Q: Can Oracle objects be created on NFS-mounted drives?
A: Generally, NO for older databases and plain old unix NFS. Then for several years the answer was, "No. With the exception of NetAppliance devices, NFS is NOT supported by Oracle for ANY database objects."
A quote from Tech Support: "Oracle doesn't support writes to NFS file systems, with the exception of some vendors' products like NetAppliance. RMAN needs confirmation of successful writes, which is not the case for NFS; otherwise backups would not be trusted. Hence, database recoverability would be compromised."
Even using NFS mounts for archive logs, rman-run backups and the like is not supported, and generally can cause errors in the database. The only possible exception is read-only tablespaces, since these are static, cannot be changed, and have no recoverability issues.
Update! Not sure if it's b/c of 10g or not, but testing mid-2007 showed that at least dumping of objects to NFS-mounted devices works! Not sure which option was the key, but the NFS options used (and their definitions) were as follows:
mount options used: rw,bg,intr,hard,timeo=600,wsize=32768,rsize=32768,xattr,dev=4f00010
Deciphered (from man mount_nfs):
rw: read-write
bg: retry in the background
intr (default): allow keyboard interrupts for hung processes
hard (default): continue to retry requests until the server responds
timeo: timeout value in tenths of a second (default 11 for connectionless, 600 for connection-oriented)
wsize: write buffer, default 32k
rsize: read buffer, default 32k
xattr (default): allows for extended attributes
dev=?
Kevin Closson (Oracle employee and NFS researcher) has a blog with information.
http://kevinclosson.wordpress.com/kevin-closson-index/cfs-nfs-asm-topics/
also see:
http://kevinclosson.wordpress.com/2007/06/14/manly-men-deploy-oracle-with-fibre-channel-only-oracle-over-nfs-is-weird/
Mid-2008: Now many SAN vendors are authorized to do NFS-mounted oracle devices. Metalink doc id: 236826.1
---
Q: What are tips for configuring Oracle over NetApp filers or other types of NAS/SAN machines?
Q: What is the definition of NAS/SAN and what are the differences?
A: First, definitions:
SAN: Storage Area Network: Clariion, raid discs, sparc storage arrays, etc.
NAS: Network Attached Storage: NetApp filers and Celerra are examples.
- NAS devices are file based, stand-alone machines that are accessed via NFS over tcp/ip networks. Update though: scsi over IP could help make NAS machines a much better alternative. Typically cheaper but slower.
- SAN devices are block based, directly connected storage devices typically running via a fiber 100mb line. Often faster, but typically more expensive. Generally faster b/c it's directly connected and does not depend on NFS.
---
Q: How do I prevent oracle from automatically starting on my pc?
A: Turn all the services to "manual" from automatic.
Start -> Control Panel -> Administrative Tools -> Services, and turn off anything that starts w/ Oracle and is set to Automatic:
- OracleMTSRecoveryService
- OracleOraHome81TNSListener
- OracleOraHome92Agent
- OracleOraHome92HTTPServer
- OracleOraHome92TNSListener
- OracleServiceBOSSSID
or: oradim -edit -sid <sid> -startmode m
---
Q: How do you fix the Oracle 8.1.7 (or 8.1.6, 8.1.5) installer bug on Pentium 4 machines?
A: Short of patching the CD: copy the contents of the install CD to a temporary directory, search for symcjit.dll, rename it to symcjit.old, and run install\win32\setup.exe. However, in practice even this didn't work...
---
Q: What can you use the svrmgrl to do in 8i?
A: Type "help" at the prompt and get these options:
- startup
- shutdown
- monitor
- archive log: use "list" to see the status of the database logging mode
- recover
- connect
- disconnect
- set
- show
- exit
- rem
- or, just execute sql statements
---
Q: What is the best way to upgrade your server? How do I upgrade a database? Upgrade approaches.
A: Possibilities:
- Migration utility: some DBAs report issues with the tool; it can only go from the terminal release to the current (i.e., if you're at 8.1.6 and the terminal release is 8.1.7, you cannot use the migration tool to go straight 8.1.6 -> 9.2).
- Export files: time consuming, fraught with inconsistencies, requires lots of free disk space.
- See the "recovery" section; follow a typical disaster recovery method, involving cold backups and basically recreating the server.
---
Q: How can you rig tnsping to look for Oracle listeners on ports besides the default 1521?
A: You'd have to modify your tnsnames.ora file, apparently. tnsping resolves the server it's "pinging" not from DNS, but from tnsnames.ora.
---
Q: How do I change the port that an Oracle server is configured to use?
A: Easy:
- edit $ORACLE_HOME/network/admin/listener.ora
- get into lsnrctl, stop and start the listener
- modify all tnsnames.ora files (on the server and on all clients) to use the new port number
---
Q: Can I re-direct pfile and/or spfile?
A: Yes:
create pfile='/tmp/initedw12_cedar.ora' from spfile;
create pfile from spfile='/tmp/myspfile.ora';
create pfile from memory;
---
Q: What are some things you should immediately do following an installation? What are configuration parameters that are known to need tuning out of the box? What are some common post-installation parameters to modify?
A: (These are arguable, of course, but most are probably accurate.)
- Configure undo tablespace size vis-a-vis the undo_retention parameter. Use the formula: undo_retention (in seconds) x block size x avg undo blocks generated per second == undo tablespace size.
Create a new undo tablespace, set undo_tablespace = NEW, drop the old tablespace, drop the dbf files.
- Increase the default temp TS size.
- Add new redo logs, increasing size from the default 100m files (done in dbca).
- Drop unnecessary tablespaces (users, tools, indx).
- Wipe out example users and the example tablespace (many users): better to just de-select them in dbca.
- alter database backup controlfile to trace; and save off the file for DR purposes.
- create pfile from spfile and get a backup copy.
- These should all be correctly set during install (by dbca or by dbassist):
* db_name (the SID)
* control_files: though you should consider spreading them around from the default areas
* instance_name (usually the SID)
* background_dump_dest: $ORACLE_BASE/admin/SID/bdump by default
* core_dump_dest: $ORACLE_BASE/admin/SID/cdump by default
* user_dump_dest: $ORACLE_BASE/admin/SID/udump by default
* pga_aggregate_target: max memory available to Oracle; target between 50-75% of system ram
* db_cache_size: make match pga_aggregate_target
* java_pool_size
* large_pool_size
* shared_pool_size
* hash_area_size
* 2k, 16k pools: configure if you want them
* keep, recycle: probably want these buffer pools available
* db_block_size: defaults to 2K on oltp installs, 8k on DW installs
* db_file_multiblock_read_count: defaults to 32 w/ DW option, 16 otherwise. You want to tune this based on your data access.
* open_cursors: defaults to 150 or 300: set appropriately
* processes: defaults to 300: might want more
* sort_area_size: defaults to 1M
- aq_tm_processes: set to 0 or delete the parameter, else spurious AQ processes occur.
---
Q: What is a good rule of thumb for what percentage of a machine's RAM to use for Oracle shared memory?
A: Old recommendations: 30%, 50%, 70-75% of physical ram. Why were these so low? Perhaps because in the old days, O/Ss were far less efficient.
Modern answer: save between 1-4gb of RAM for the O/S kernel (a bit more if there are other apps running on the box besides the DB) and allocate all the rest to the SGA.
http://docs.oracle.com/cd/B28359_01/server.111/b28274/memory.htm
All this says about the issue is:
"7.1.6.3 Allow adequate memory to individual users: When sizing the SGA, ensure that you allow enough memory for the individual server processes and any other programs running on the system."
Put another way:
SGA > RAM == swapping
SGA + user sessions > RAM == swapping
SGA + user sessions + box services > RAM == swapping
SGA + user sessions + box services < RAM == good
http://www.dba-oracle.com/art_dbazine_ram.htm is a somewhat dated Don Burleson article where he recommends reserving 10% of RAM on Unix and 20% on Windows. But he points out that wasting RAM is a sin.
---
Q: What is the implication of MAXDATAFILES, the db_files parameter and the OS limitation on open files when creating a database?
A: Difficult to get a straight answer, but these points seem to be true:
- maxdatafiles is set during installation (dbca) and is capped by the OS limitation on open files per process. In Solaris, this is seen via ulimit -n, but is really controlled by the /etc/system kernel parameters rlim_fd_cur and rlim_fd_max. The maximum in 64-bit OSs is usually 64K, or about 65536.
- maxdatafiles seems to solely control the size of the control files at boot, since oracle pre-creates "space" in the control files for every possible data file you intend to create. You can always add more files than maxdatafiles is set to initially, just as long as db_files is set high enough.
- db_files limits the number of open files per instance, and defaults to 200 on Oracle 9i. You can simply change this value and reboot to increase it. There is some memory waste if the value is set too high.
- However, per Oracle's manuals, you can even create more files than the OS-defined limit by just increasing db_files.
Oracle's DBWn processes can treat open file descriptors as a cache and can close them if unused to stay below the OS limit. --- Q: What is the maximum size for a .dbf file? Q: What is the largest size of a data file in oracle? Q: what are some limits, maximum size, max sizes of objects in oracle? A: exactly 4 million blocks minus 1: 4194303 (4096*1024 - 1) blocks So, depending on the block size: (table pulled from some Oracle manual some time ago) Table 1-11 Oracle File Size Limits File Type Maximum Size in Bytes - Datafiles where db_block_size=2048 8,589,932,544 (8gb) - Datafiles where db_block_size=4096 17,179,865,088 (16gb) - Datafiles where db_block_size=8192 34,359,730,176 (32gb) - Datafiles where db_block_size=16384 68,719,460,352 (64gb) - Datafiles where db_block_size=32768 137,438,920,704 (128gb) Update: 10g introduces "bigfile" tablespaces that can have 4,294,967,296 blocks, so a single datafile can be 8-128tb in size (depending on block size). You can also alter tablespace resize .. instead of 9i's alter database datafile resize... 10g updates - A bigfile tablespace with 8K blocks can contain a 32 terabyte datafile. - A bigfile tablespace with 32K blocks can contain a 128 terabyte datafile. --- Q: What do the different options on create tablespace in 10g mean? A: From 10g SQL Reference manual, under "create tablespace" - a bigfile tablespace contains only one datafile or tempfile, which can contain up to approximately 4 billion (2^32) blocks. The maximum size of the single datafile or tempfile is 128 terabytes (TB) for a tablespace with 32K blocks and 32TB for a tablespace with 8K blocks. - A smallfile tablespace is a traditional Oracle tablespace, which can contain 1022 datafiles or tempfiles, each of which can contain up to approximately 4 million (2^22) blocks. 4 million 8K blocks is 32gb, so the max size of each is 32gb. By default, if omitted Oracle creates whatever is specified as the default TS type (which, by default, is the "smallfile" type). 
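To make the bigfile/smallfile distinction above concrete, here is a minimal sketch (the tablespace names and datafile paths are made up for illustration):

```sql
-- Bigfile: exactly one large datafile
create bigfile tablespace big_ts
  datafile '/u01/oradata/SID/big_ts01.dbf' size 10g;

-- Smallfile: traditional tablespace, can hold up to 1022 datafiles
create smallfile tablespace small_ts
  datafile '/u01/oradata/SID/small_ts01.dbf' size 1g;

-- Bigfile tablespaces can be resized at the tablespace level,
-- instead of resizing an individual datafile:
alter tablespace big_ts resize 20g;
```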
--- Q: When is the default tablespace type defined in 10g? A: during database create (dbca). Modifiable dynamically: ALTER DATABASE SET DEFAULT BIGFILE TABLESPACE; --- Q: How can I see what my default tablespace type is? How can I query all my default database settings? A: SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'DEFAULT_TBS_TYPE'; --- Q: What is a good block size strategy for my database? (What is a good db_block_size for my database?) A: Oracle provides 2k, 4k, 8k, 16k and 32k block sizes (on Solaris ... other O/S may have 128k block sizes available) The block size for the server must be specified at database create time, thus the DBA must know what types of activities will be going on in the database. Rules of thumb are: - Heavy OLTP: 2k block size (but only with high contention ... not just inserts) - Heavy Data Warehouse/DSS: 32k block size - *Any* other activity; go with the standard block size for your OS (8k on unix, 4k on Windows boxes). I've seen zero need for anyone to use a 4k or 16k server. Some oracle operations seem to be internally tuned to using a standard block size (e.g. drop table took five times as long with a 2k block size as with 8k in our internal tests). There are pros and cons for small and large block sizes: Smaller (2k, 4k): Pros: good for heavy oltp, small rows with random access, helps reduce block contention Cons: block overhead (around 180 bytes) consumes a very large % of the block, larger rows won't fit in a single block, immediately introducing chaining and migration issues. Larger (16k, 32k, etc): Pros: Lower % of block overhead compared to block size, so more data can be stored. Allows more rows to be read in a single I/O. Great for Large objects, very large rows, and sequential scan activity. Cons: Wastes lots of memory with small rows. Block contention for index blocks if doing OLTP activity. --- Q: How do you delete a database using dbca? 
A: log into dbca, then follow the directions to delete a database. Then: - ps -fe | grep SID and kill -9 the existing process - cd $ORACLE_HOME/dbs and delete lk, .ora, orapw files for SID - cd $ORACLE_BASE/admin and rm -rf SID - cd $ORACLE_BASE/oradata/ and rm -rf SID - (missing; deleting the "service" ... how do you do this? ... deleting from dbca a second time catches it) then log back into dbca, delete the database a second time. Now you should be fine. (lesser) - cd $ORACLE_HOME/network/admin, edit sqlnet.ora, listener.ora, tnsnames.ora and remove all instances of SID - cd $ORACLE_BASE/oraInventory/ 7/13/05: actually, this process seems to work pretty cleanly if you've got ORACLE_BASE and ORACLE_HOME properly setup. --- Q: Can you delete a database without using dbca? What steps does the "delete database" action actually perform? A: ?? It doesn't seem to properly clean out all the files as expected. To fully remove any existence of a database, do the following: - cd $ORACLE_HOME/dbs and remove all evidence of init files (spfile, pfile) - cd $ORACLE_BASE/admin and rm -rf $ORACLE_SID - remove all .dbf files for the instance - cd $ORACLE_HOME/network/admin and remove the database lines from listener.ora, tnsnames.ora --- Q: How can you tell what physical machine an instance is running on? A: - look in tnsnames.ora (if you were able to connect to it, its real hostname/ip must be in the HOST line of the database entry). - or, once logged in: select host_name from v$instance; --- Q: What is the difference between the database Identifier (DBID), the "database name," the Service name and the Server ID (SID)? Q: What is the difference between the SID and the Service Name? Q: What is the difference between the Service Name and the SID? Q: what is the unique name of my database? A: - DBID is an internal, unique identifier for a database. It's a 10-digit number that you'll hopefully never need to use. - Database name: Global Database Name, the name you login: scott@dbname. 
This is in v$database as "name" - Service name: same as db_name, in tnsnames.ora - SID: "Oracle System Identifier" or Instance name: 2nd blank on the dbca name screen, cannot have underscores SID = unique name of the INSTANCE (eg the oracle process running on the machine). Oracle considers the "Database" to be the files. Service Name = alias to an INSTANCE (or many instances, if using RAC). The main purpose of this is if you are running a RAC cluster: the client can say "connect me to SALES.acme.com", and the DBA can change on the fly the number of instances which are available to SALES.acme.com requests (say, RAC_DB1, RAC_DB2, etc), or even move SALES.acme.com to a completely different database without the client needing to change any settings. SQL> show parameter unique NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ db_unique_name string eid1_exa_cedar This is the absolutely unique name of a database within an environment, required when you have a common RMAN catalog server that connects to multiple databases with the same name across different environments. For example, let's say you have APP_DB in your dev, qa and prod environments but want the catalog server to back them all up. Well, you'll need a db_unique_name defined so that RMAN knows the difference between APP_DB in dev, APP_DB in qa and APP_DB in prod. --- Q: How do you find the DBID of your database? A: - run RMAN and connect to the database as your target; DBID is reported upon successful connection. - sqlplus: select * from v$database; - command line: nid target=sys/sys@target: this will get the DBID and print it out before failing b/c the database is open... --- Q: How do you manually install the JavaVM? A: See Metalink doc id 149393.1, which sends you to doc id 204935.1 and doc id 202914.1 . (good luck getting it to run properly though) --- Q: How do you "open a database for migrate" so you can run catpatch.sql? 
A: shutdown, then SQL> startup migrate then run catpatch.sql, shutdown and re-startup. --- Q: What does it mean when I try to run sqlplus after startup and get: Error 6 initializing SQL*Plus Message file sp1.msb not found SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory A: your oracle environment is not setup on the unix level - ORACLE_HOME must be set - $ORACLE_HOME/bin must be in the path - NLS_LANG *should* be set, but is optional - finally, remember to export all these variables. They can be set, but if they're not exported, the forked process won't inherit the value and won't be able to find the .msb files. - ORACLE_SID as well; if you don't set this, you'll get ORA-07217: sltln: environment variable cannot be evaluated. when you try to start Oracle. --- Q: Since Oracle has many pay-for options that not everyone uses, do you have to be careful when doing an install and specifically NOT install certain pay-for options? (like partitioning, context server, etc) A: No, you can install the entire product suite, but will not be charged for options unless you're actively using them (per an oracle-L dba who endured an Oracle audit). Interestingly though, even when you're not using partitioning, partitioned tables exist. Query dba_objects by object_type. --- Q: Is it better to have lots of small data files, or fewer large data files in your database? A: Arguable, of course. My answer is, I'd rather have fewer, larger data files. Pros: - administratively, doing free-space queries with large numbers of datafiles takes forever - Lots of smaller files means more single points of failure - lots of datafiles means far more i/o when routine database checkpoints occur Cons: - huge data files make it very difficult to move data files from one f/s to another. 
- huge data files can make cold backups very onerous (all the more reason to use RMAN, which only backs up used data pages) --- Q: Why are there asterisks ("*") in front of parameters in the pfile/init.ora file? A: It means "for all instances." When RAC was introduced, a way to differentiate between multiple instances for the same database was needed. --- Q: How do you do the DDL to set a default value for a column? A: default clause. see this code example: create table default_test (col1 integer, col2 varchar2(5), col3 integer default 1); insert into default_test (col1,col2,col3) values (10,'aaa',99); insert into default_test (col1,col2) values (20,'zzz'); select * from default_test; -- col3 gets a default value when not specifically stated. --- Q: How can I tell if I own a particular feature in my Oracle Product? A: select * from v$option; --- Q: How can I tell when my database was created? A: select created from dba_objects where owner='SYS' and object_name='C_OBJ#'; --- Q: what is the smallest amount of PGA, SGA that an Oracle database will start with? A: 10g: Apparently on solaris 64bit, 84m ORA-00821: Specified value of sga_target 52M is too small, needs to be at least 84M idle> Defaults to 40% of physical ram or 1.6gb SGA, 3.4GB PGA (at least on this test machine). However, setting things to 84m wouldn't allow the db to start. It's obviously higher. --- Q: What should I set my DISPLAY to to run dbca? A: DISPLAY=localhost:0.0;export DISPLAY If you're connecting via ssh, make sure to tunnel Xwindows. Some linux boxes will have issues with this tunnel. --- Q: Can you redirect tnsnames.ora to another file location? A: Yes, just put "IFILE=/alternate/directory/tnsnames.ora" in the top --- Q: How do I remove an installation of Oracle client from my PC? Remove oracle client? 
A: - regedit, go to the Oracle section of hkey-local-machine/software and remove all configuration files - edit environment variables and clean out path and any other variables oracle may have populated - delete entire Oracle directory. - reboot, re-install. --- Q: What is HugePages? What does it do? Is there a max? How do I tell what mine is set to? A: HugePages is a feature integrated into the Linux kernel with release 2.6. This feature basically provides an alternative to the 4K page size, providing bigger pages. - HugePages can be allocated on-the-fly but they must be reserved during system startup. Otherwise the allocation might fail, as the memory is already mostly paged in 4K pages. - HugePages are not subject to reservation / release after the system startup unless there is system administrator intervention, basically changing the hugepages configuration (i.e. number of pages available or pool size) o Why use it? - HugePages are not swappable. Therefore there is no page-in/page-out mechanism overhead. - Relief of TLB pressure: with HugePages, fewer translations need to be loaded into the TLB - Decreased page table overhead - Eliminated page table lookup overhead: Since the pages are not subject to replacement, page table lookups are not required. - Faster overall memory performance: On virtual memory systems each memory operation is actually two abstract memory operations. Since there are fewer pages to work on, the possible bottleneck on page table access is clearly avoided. o How do you set it? vi /etc/sysctl.conf and set the vm.nr_hugepages parameter: vm.nr_hugepages=90000 o Am I using hugepages now? What is my current hugepage set to? $ grep Huge /proc/meminfo HugePages_Total: 90000 HugePages_Free: 67215 HugePages_Rsvd: 13052 HugePages_Surp: 0 Hugepagesize: 2048 kB o Is the size of each hugepage configurable? Yes: grep Hugepagesize /proc/meminfo to see the size. 
HugePage sizes vary from 2MB to 256MB or even 1GB based on kernel version and HW architecture. The default is 2MB. o Is there a max? ?? o What Oracle memory management features are compatible with HugePages? - yes: ASMM, automatic PGA management - no: AMM o How should I configure it? - run the hugepages_settings.sh (from the below doc id) to get a recommendation - plan on binding all of your SGA to it and size accordingly. o Do I have to do anything to Oracle? SYS@edw11 > show parameter large_pages NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ use_large_pages string ONLY some good reference documents/links on it: - HugePages on Linux: What It Is... and What It Is Not... (Doc ID 361323.1) - Master Note: Overview of Unix Resources (Doc ID 1498952.1) (replicated info from main HugePages link) - HugePages on Oracle Linux 64-bit (Doc ID 361468.1) - Shell Script to Calculate Values Recommended Linux HugePages / HugeTLB Configuration (Doc ID 401749.1) - https://oracle-base.com/articles/linux/configuring-huge-pages-for-oracle-on-linux-64 - http://grokbase.com/t/freelists.org/oracle-l/08a5dkwd28/hugepages-benefits-drawbacks - http://docs.oracle.com/cd/E11882_01/server.112/e40402/initparams268.htm#REFRN10320: use_large_pages - http://www.pythian.com/blog/pythian-goodies-free-memory-swap-oracle-and-everything/ - http://docs.oracle.com/cd/E11882_01/server.112/e10839/appi_vlm.htm#UNXAR385: 11g DBA guide chapter on it - https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt - http://docs.oracle.com/cd/E37670_01/E37355/html/ol_config_hugepages.html =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Administrative/Operations --- Q: How do I do a "whoami" operation from sqlplus? How do I see who i'm logged in as? A: SQL> show user --- Q: What is the syntax for adding a primary key to an already-created table? 
A: (question asked b/c syntax apparently has changed recently) old: alter table test modify (col_1 constraint test_cst primary key); new: alter table test add constraint test_cst primary key (col_1); and then, alter table test enable constraint test_cst; (though constraints default to "enabled") --- Q: What is syntax for foreign key constraints? A: alter table child_table add (constraint name_fk1 foreign key (key_id) references parent_table (key_id)); ?? how do you add in the "using index" clause for Foreign Keys? I don't believe you can. -- Q: What is the syntax for a check constraint? A: create table tboss.booleantest (flag varchar2(1)) ; alter table tboss.booleantest add constraint chk_booleantest_flag check (flag in ('Y','N')); insert into tboss.booleantest values ('Y'); insert into tboss.booleantest values ('N'); insert into tboss.booleantest values ('M'); --- Q: What is a key preserved table? I couldn't update through a view b/c one of the tables wasn't 'key preserved.' A: A key preserved table has a view on top of it where the key of the table also serves as the key of the view. Asktom's link about this topic: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:548422757486 or for more examples http://asktom.oracle.com/pls/asktom/f?p=100:11:1600079102388128::::P11_QUESTION_ID:273215737113 --- Q: How do you view the text of a stored procedure? A: (Thanks Danielle Lemos dlemos@goldencross.com.br) select text from user_source where name='YOURPROC' order by line; --- Q: Do private Synonyms override Public Synonyms? Q: What is the order of resolution of objects? Q: What is the search order of objects in queries in Oracle? A: Yes. When an operation is done on a table, Oracle tries to find the table in this order: (from Chapter 21 of Oracle 8i Concepts reference manual) 1. searching in the current schema 2. searching for a private synonym in the same schema. 3. searching for a public synonym. 4. searches for a schema name that matches the first portion of the object name. 
If a matching schema name is found, Oracle attempts to find the object in that schema. 5. If no schema is found, an error is returned. --- Q: What does the "looping chain of synonyms" error mean? "ORA-01775: looping chain of synonyms" A: It means that the synonym that you're using to select from points at a table that no longer exists. To diagnose, select * from dba_synonyms where synonym_name='YOURSYN'; on the target database and then look directly at the tables its pointing at. --- Q: Do synonyms dynamically change in an active session if they're modified? A: yes. running CREATE OR REPLACE SYNONYM BI_EDW_APP.testtable FOR RDW.testA; and CREATE OR REPLACE SYNONYM BI_EDW_APP.testtable FOR RDW.testB; while actively logged into bi_edw_app and performing selects after each step shows that the synonyms dynamically change. --- Q: Do table constraints create indexes (as in Sybase and Microsoft SQL Server)? A: - Primary Key constraints: Yes (except when involving index organized tables) - Foreign Key constraints: No. They merely create "rules" of behavior for the table. An additional create index statement must be issued. This is a potentially huge performance impact: without an index on the foreign key column, the child table will be completely locked during DML (update, delete) against the parent table's key. --- Q: Can I just create a unique index on my field instead of a Primary Key constraint? A: Yes, in that a unique index will do the same as a primary key constraint's associated index. No, in that Unique indexes cannot be referenced by foreign keys elsewhere. --- Q: How can I rename a column in Oracle? A: You can't! However you can use this workaround: SQL> alter table x add (newname datatype); SQL> update x set newname = oldname; SQL> commit; SQL> alter table x drop column oldname; Update: in later versions of the database this is a simple alter table command: alter table x rename column a to b; --- Q: How do I modify a column to be non-null in Oracle? 
A: easy: alter table abstract modify abstract_num not null -- Q: Followup Question: how do you add a non-null column to a table? A: Answer: add it as nullable, populate it, alter it non null. SQL> alter table x add new_col integer; SQL> update x set new_col=somenewnumber; SQL> alter table x modify new_col not null; --- Q: How do you drop a column? A: SQL> alter table test drop column col1; Note that this can have downstream effects on the internal system data dictionary. It can introduce corruption to the database. --- Q: What is the best way to drop a column from a HUGE table? A: SQL> alter table tablename set unused column columnname cascade constraints; SQL> alter table tablename drop unused columns checkpoint 1000; --- Q: Can I shrink a column in Oracle? A: Easy: alter table X modify (columnname newsize); As long as there exists no data that doesn't fit in the new size, this will work very quickly. If you do have data that needs to be truncated in order to fit, you'll have to do a create table as select (CTAS) and do a substr on the column in question. --- Q: How do you get Index usage statistics in Oracle? A: - in Oracle 9i and above: sql> alter index YOURIDX monitoring usage; sql> select * from v$object_usage; ... do some operations sql> alter index YOURIDX nomonitoring usage; Note: you must log in as the OWNER of the objects you're monitoring in order to see usage stats. Also, if you re-issue the monitoring usage command, it clears out the previous monitoring data. - Oracle 8i and below: no built in method. Tom Kyte suggests putting each index in its own tablespace - object_access.sql; an admin script provided by Don Burleson, which goes through the library cache, explain plans all sql existing there and gets index usage stats. Update: in 10g the old object_access.sql doesn't work. Might have to dig around to find the code (or buy Burleson's book). 1/22/08: However, have a good process for simulating the object_access.sql. 1. get all the sql_id's for a table 2. 
spool those sql_ids from v$sqlstats to a dbms_xplan statement 3. spool output of that to a flat file 4. grep for indexes and do wc's Example: there's 7 indexes on the table DPM_CUST_PAYROLL owned by DPM. SQL> spool dpm_plan.sql select 'select * from table(dbms_xplan.display_cursor(''' || s.sql_id || '''));' from v$sqlstats s, v$sqlarea v,dba_users u WHERE u.user_id = v.parsing_user_id and s.sql_id = v.sql_id and s.sql_text like '%DPM_CUST_PAYROLL%' and u.username='DPM' order by s.last_active_time desc; SQL> spool off 2. edit dpm_plan.sql, strip out headers and footers. Get back into sqlplus 3. run the following SQL> spool dpm_plan_20080123.out SQL> @dpm_plan.sql SQL> spool off 4. quit sqlplus, then run each of the following and cut-n-paste output grep -i DPM_CUST_PAYROLL_IDX1 dpm_plan_20080123.out | wc -l grep -i DPM_CUST_PAYROLL_IDX_01 dpm_plan_20080123.out | wc -l grep -i DPM_CUST_PAYROLL_IDX_011 dpm_plan_20080123.out | wc -l grep -i DPM_CUST_PAYROLL_IDX_02 dpm_plan_20080123.out | wc -l grep -i DPM_CUST_PAYROLL_IDX_022 dpm_plan_20080123.out | wc -l grep -i PAYROLL_IDX_03 dpm_plan_20080123.out | wc -l grep -i PAYROLL_IDX_033 dpm_plan_20080123.out | wc -l --- Q: Is there a limit to the number of characters in a column name? A: Yes: a 30-character limit on all object names. Geeze. All labels (column names, object names). --- Q: How do I get a list of processes running on an Oracle server? How do I emulate sp_who in Sybase? A: Must select info out of v$session --- Q: Is there a row length limit in oracle (like in Sybase/MS Sql)? Can you have as many varchar2(4000) fields as you'd like? A: Yes, but it's so much larger than Sybase's limits it might never be hit. (per Oracle 8 manual: The Oracle limit for the maximum row length is based on the maximum length of a row containing a LONG value of length 2 gigabytes and 999 VARCHAR2 values, each of length 4000 bytes: or: 2(254) + 231 + (999(4000)) = 3,996,739 bytes). Versus 1962 in pre 12.0 Sybase. 
there are no limits to how many varchar2(4000) fields you can have, either. --- Q: How do I get a list of all my startup values while logged in? A: as user 'sys' sql> select * from sys.props$; sql> show parameter or edit the init.ora file (though this will only get you values that were manually set, and you won't have any idea what the defaults were) --- (from various comp.database.oracle.server posts about Character Sets) Q: How do I know what character set i'm using? A: as user sys select * from sys.props$ where name = upper('nls_characterset'); or look at the view NLS_DATABASE_PARAMETERS or select * from v$nls_parameters and find the 'NLS_CHARACTERSET' value or SELECT * FROM nls_database_parameters WHERE parameter LIKE '%SET'; Q: What are the character sets in Oracle? A: to get a listing of the valid character sets in your database, run this: select value from v$nls_valid_values where parameter = upper('characterset'); Q: What is the default character set when you create an Oracle server? A: US7ASCII in v7 Oracle and below. (Ours seems to be WE8ISO8859P1 in v8 and above.) Q: What is the client environment variable that controls the character set? A: NLS_LANG Q: How do I change Character sets? A: update the sys.props$ table and then reboot your server (however, this is unsupported behavior. If you make an error, your server will not boot) update sys.props$ set value$ = upper('your_characterset') where name = upper('nls_characterset'); Confirm the existence of your character set in the nls_valid_values view, and confirm the correctness of your update statement before shutting down and restarting your instance. 
Supported method: (substitute sqlplus /nolog for svrmgrl in 9i); from http://www.fors.com/velpuri2/NCHAR.txt SVRMGR> SHUTDOWN IMMEDIATE; SVRMGR> STARTUP MOUNT; SVRMGR> ALTER SYSTEM ENABLE RESTRICTED SESSION; SVRMGR> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0; SVRMGR> ALTER SYSTEM SET AQ_TM_PROCESSES=0; SVRMGR> ALTER DATABASE OPEN; SVRMGR> ALTER DATABASE CHARACTER SET ; SVRMGR> SHUTDOWN IMMEDIATE; -- OR NORMAL SVRMGR> STARTUP; (this worked just fine 5/1/04 for me) --- Q: What codeset should you use if you are planning on storing international/multi-language characters? A: (cutting and pasting a conversation from Northrop Grumman) Best possible options include Oracle's AL32UTF8 and AL16UTF16. See Table 5-4 and Table 5-5 of the URL below. http://download.oracle.com/docs/cd/B10501_01/server.920/a96529/ch5.htm#1004835 AL32UTF8 is Oracle's name for Unicode UTF-8 encoding. This character set is valid as the database character set, that is, for CHAR, VARCHAR2, LONG and CLOB data. AL16UTF16 is Oracle's name for Unicode UTF-16BE encoding. This character set is valid as the national database character set, that is, for NCHAR, NVARCHAR2, and NCLOB data. AL32UTF8 is strongly recommended by Oracle for any database that is to be used (now or in future) for global data, e.g. for Internet applications or in multinational companies. With the economy becoming more and more global, and Unicode being the Internet's character set of choice, AL32UTF8 is generally recommended for all databases. The use of the national character set data types, and thus AL16UTF16, is generally not recommended because of implementation limitations. Some caveats: AL32UTF8 is a variable-width multibyte character set. It encodes many characters, even European, in more than one byte. Applications that were written with single-byte character sets in mind need to be reviewed and possibly corrected to work properly with AL32UTF8. Also, AL32UTF8 requires more CPU cycles to process a string. 
If your database does a lot of string processing and the server runs at its capacity, you may run into performance problems. But do not let performance considerations decide about the use of AL32UTF8. It is better to spend some money now for some extra hardware than much more money later for a project to migrate an already large database from WE8MSWIN1252 to AL32UTF8, when Unicode becomes a requirement. (text from Oracle Forum) The Unicode standard has been adopted by many software and hardware vendors. Many operating systems and browsers now support Unicode. Unicode is required by standards such as XML, Java, JavaScript, LDAP, and WML. It is also synchronized with the ISO/IEC 10646 standard. Oracle Corporation started supporting Unicode as a database character set in Oracle7. In Oracle9i, Unicode support has been expanded. Oracle9i supports Unicode 3.1. --- Q: Where do I find lists of chr() values for each character for my character set? A: ??? I have no idea; have ended up eyeballing a series of commands like this: select '1',chr(1) from dual; select '2',chr(2) from dual; --- Q: How do you get the ascii value of a particular character? A: select ascii('character') from dual; or if you can't type it: select ascii(substr(transaction_amount,1,1)) from eai_pcard.pcard_data_source_july09xtra; --- Q: How do I grant insert privileges to another user on my tablespace? A: sys only: - grant update any table to mei; - grant insert any table to mei; - GRANT ALL PRIVILEGES TO NEW_USER; gets them all. works from a normal user; table by table. - grant insert on committeeperson to mei; --- Q: What are the different types of Indexes available in Oracle? A: ?? not complete - Binary Tree/B*Tree: default index type in oracle: a binary tree organization of key'd pages. Same as a Sybase non-clustered index. Called a "tree" because of the "leaf" structure of keys. 
Comprised of blocks: blocks are either "branch" blocks (the upper levels of the index) used for searching or "leaf" blocks (lower level blocks), used to hold the indexed data values and one or many rowids corresponding to the data's location. If the index is unique, then only one rowid will be contained in the lowest level leaf block. If non-unique, then a list of rowids for all matching keys is stored in the lowest level leaf block. A query of an index traverses this "tree" of blocks to find the values requested. - Bitmap: differs from the b*tree structure in that a bitmap value for each key is stored in the lowest level of the leaf block instead of a list of resolving rowids. Each bit in the bitmap corresponds to a rowid, and thus if the bit is set, the row w/ the corresponding rowid contains the value. As a result, the rowids resolving to satisfy a query can be very quickly computed (bitmap functions merely have to analyze a 0 or a 1 in each bit position, which are boolean functions and are very fast). Advantages of Bitmap indexes: - they are very small, as compared to comparable b*tree indexes (the lower the cardinality, the smaller the index) - they are very fast to resolve queries and outperform b*tree indexes - common in DSS systems or on reference tables that are static/infrequently updated. - If values in a column repeat over 100 times, they are candidates for bitmaps Disadvantages: - updates to a table w/ a bitmapped index will lock every row in the table with a value matching the bitmap'd value. - limit of 30 columns. - Function-based: new to 8i: allows an index to be created (and used) on a sql function. Oracle pre-computes the value of the function and stores it in the index, and uses that value to resolve the query. 
Ex: sql> create index test_idx on test (upper(col2)); Then, ideally this index would be used to resolve queries like this: sql> select upper(l_name) from test; Prior to Oracle 8i's introduction of function-based indexes, a query like this would have table-scanned automatically. - Reverse Key: this index reverses the bytes of each column indexed (except the rowid) while keeping the column order. This has the effect of distributing the insert/update load on the table. However, by doing this, index range scans become impossible (since the leaf blocks are no longer adjacent). sql> create index X on table (col1) reverse; - B*Tree Cluster: - HashCluster - Domain: - Bitmap Join: Not really a different type of index, but named as such: - Compound/Composite/Concatenated: an index on more than one column. Ex: create index X on T (a,b). 32 columns max. (creates a B*tree) --- Q: Is it a myth that you have to order your where clauses? A: Yes, it's a myth with the CBO. There is no advantage, in theory, to matching your where clauses to your index key orders. Order by clauses are another question; always make multi-column order by clauses match existing indexes (to save an additional sort operation) However, see the next question... --- Q: Is the order of columns in a composite index important? A: YES. Always put the most commonly accessed or most selective columns first. And then write your queries to match the column order. - If a composite index's columns match a query's columns exactly, the optimizer never has to look at the table. - Make the "leading" fields in a composite index match columns frequently used in where clauses. - Order the fields in a composite index from most selective to least selective (most distinct values to least number of distinct values, or from highest cardinality to lowest cardinality). --- Q: Is there a concept of a "clustered" index in Oracle (ala Sybase?) A: No; all indexes in oracle are a B*-Tree structure by default. 
Tables are automatically heaps by default, unless you create the table as an index organized table (IOT).
---
Q: What are some additional tasks to be done if using function-based indexes?
A: - create the function-based index; eg create index name on table(substr(col,4,14))
- analyze table, analyze index
- alter session set query_rewrite_integrity = trusted;
- alter session set query_rewrite_enabled = true;
- compatible must be set to 8.1.0.0.0 or greater (obviously not an issue in 9i)
- optimizer_mode cannot be set to "rule" (RBO) (hopefully you're not still using this...)
Options:
- try using a hint to force the index, if it's not picked up immediately
---
Q: What are some additional steps that must be done before bitmaps are allowed?
A: Bitmap indexes are not included in "Standard" editions of Oracle, only Enterprise. See Metalink note doc id 225688.1
Nov 2015 update: I don't think this is right, and the above doc id no longer exists.
---
Q: How do I tell which database I'm logged into from sql*plus?
A: posted to Oracle-L 9/10/03
select sys_context ('userenv', 'db_domain') as db_domain,
       sys_context ('userenv', 'db_name') as db_name,
       sys_context ('userenv', 'host') as host,
       sys_context ('userenv', 'instance') as instance
from dual;
or
SELECT ORA_DATABASE_NAME FROM DUAL; (though this may not be set correctly)
----
Q: What are some "common" Oracle error codes and what do they mean?
A: errors in form "ORA-XXX"
ORA-00600: internal error, often data file corruption: akin to "605" errors in Sybase.
ORA-01795: >1000 search terms in the "IN" clause
ORA-01461: "string literal too long": >2000 characters inserted into a non-long field
ORA-01658: unable to create extent; out of space
ORA-00001: Constraint violation
ORA-03113: End-of-file communication error: lost connection to server, by reboot or network issue.
---
Q: How do I troubleshoot ora-600/ora-00600/7445 errors?
Q: What is the metalink note for the Ora-600 lookup tool?
A: *** caught up to here.
Metalink doc id: 153788.1
---
Q: How can you compare the data contents of two tables?
A: Rather difficult. Options:
- export data out, sort at the unix level, then do a diff
- create a CRC value for the data row by row, then compare CRC values by joining on the PK
---
Q: How do I see what temp tablespaces exist?
A: select * from dba_temp_files;
---
Q: How do I monitor temp tablespace usage?
A: - select * from v$sort_usage where sid = <sid>;
- select * from v$tempstat;
---
Q: How do I shrink temp tablespace?
A: 8i and previous: a manual process of creating a new temp tablespace, assigning all users to temp_new, offlining the old tablespace, dropping the tablespace and then deleting the associated .dbf data files.
9i: alter database tempfile '/data/oradata/system/temp01.dbf' resize 128M;
---
Q: How do I shrink a regular tablespace?
A: oldschool: Exp out, drop and recreate the tablespace with a smaller size, imp in. Shouldn't be any more difficult than that.
newschool: SQL> alter database datafile '/array/oradata/STG30DEV/infa01.dbf' resize 10m;
see oracle_admin.sql for Tom Kyte scripts to generate the statements automatically for you (only up to 9i, not ported to 10g).
---
Q: When resizing datafiles to be smaller, I receive the error: "ORA-03297: file contains used data beyond requested RESIZE value" How can I "defragment" tablespaces to get around this error?
A: You cannot; the only recourse is to move tables (and their offending segments) to other tablespaces or drop them. You can try alter table ... shrink space.
Metalink Note:237654.1 has some sql routines to find objects in the tablespace that are beyond where you want to resize. See oracle_admin.sql for code.
Note:130866.1 "How to Resolve ORA-03297 When Resizing a Datafile by Finding the Table Highwatermark" should help too.
---
Q: Can you flat out drop a datafile from a tablespace?
A: Yes of course, as long as there's no data in it.
alter tablespace test_ts drop datafile '/u04/oradata/oradev/testts_02.dbf';
---
Q: How do you change the maxsize of a datafile, if it's already been created?
A: You can't just pass in maxsize by itself; you have to turn autoextend on first... example:
alter database datafile 'E:\ORADATA\DCSTST2\ts_tstDBF' autoextend on next 1m maxsize 15m;
---
Q: What are caveats to working with temp tablespaces?
A: - in 9i, you can specify multiple default temp tablespaces for users, and in theory can load balance among them. However, "alter database default temporary tablespace tableX" will reassign all users to that tablespace.
---
Q: What are good options to "archive" off old data?
A: - Partition on your archival column (say entry date, in a classic sense), then "archive" the partition to storage tables in an ARCHIVE schema, or by exporting it, or simply truncating the partition.
- create table as select (CTAS) the rows to KEEP into a new table, truncate the old table, copy the rows back. The advantage of doing this (versus the reverse: simply deleting the old rows) is that the table gets reorganized and your high water mark (HWM) gets reset.
---
Q: What is an index organized table (IOT)?
A: A table whose data is not stored as a heap, but rather in a b*tree index structure. Allows you to specify the storage order of a table per the primary key. Requires a primary key (which specifies how the data will be ordered). Never "table scans" since it's already an index.
Limitations: cannot be clustered, cannot contain LONG columns.
Additionally, you can use key compression to increase the number of rows per block (reducing i/o), and you can use "row overflow" to push "larger" rows to a heap component of the table.
syntax: create table (columns) constraint pkname primary key (columns) organization index tablespace ...
Note: IOT segments get stored in dba_segments as indexes ...
---
Q: Why not ALWAYS use an IOT, on every table?
A: Situations to use IOTs:
- Documents suggest IOTs for tables with "large, non-key columns" to speed data retrieval, where the table is always accessed via the PK and NOT another field in the table. Eg: an 8 column table with 6 of the columns part of a composite primary key.
- Tables that are SOLELY accessed by the PK (and never any other column) are good candidates as well. Example: a table with 3 columns, all of which are in the PK (like a many-to-many resolving table).
- IOTs are better with DSS, but not OLTP apps/tables. Reason: since the table is stored as an index, inserts/updates/deletes will cause much larger fragmentation issues with an IOT than with a normal table. DSS apps theoretically are much more static, and their blocks are much more packed with data.
Pros of IOTs:
- IOTs save space; no need for a separate PK index on the table
- IOTs provide large performance increases for PK-based lookups
Cons of IOTs:
- loading data into an IOT takes far more time than into a heap table (some suggest loading to a normal heap, creating a b*tree index matching the PK of the eventual IOT, then doing insert into iot_table select .. from). This is because data must be put into the correct spot in an IOT, as opposed to just being appended to the end of a heap table.
- Buggy implementation: known bug #2974947 in 8.1.7.4 and 9.2? over Solaris (not sure what the bug does), and some obscure bugs seen in the initial IOT release. DML commands cause the "logical rowids" to be incorrect, causing any secondary indexes on the table to require frequent rebuilding if you're doing any kind of updating on the IOT.
- massive index splitting while loading? Suggested to load into heap tables, then do a "create table as select" (CTAS) to create the IOT?
- IOTs do not support "direct-path" (nonlogged) inserts
- IOTs suffer from the same maintenance concerns as indexes: block splits. Suggest setting pctfree 35%+ to avoid these if at all possible.
---
Q: what is "overflow" concept used in relation to IOTs?
A: A way to manage the size and performance of IOTs by creating a heap-based overflow segment that holds infrequently accessed nonkey columns from the IOT structure. An interesting way to "partition" a table.
---
Q: what are X$ structures?
A: (primarily from an Oracle-L post 9/30/03 by Steve Adams)
2 types of x$ constructs:
- "x$tables" are raw memory arrays that reference directly to the init.ora parameters. x$ksupr is an example.
- "x$functions" call internal functions to return system info that is not inherently in memory arrays/tabular/memory resident (ala x$tables). x$ksmsp is an example.
Both reside in the SGA and PGA. The v$ views typically read the x$ tables/arrays to report the relevant system information. The x$ structures do not take up any additional memory, and they are usually of very little direct value to dbas, since the v$ views expose the same information.
---
Q: What are nested tables?
A: ???
---
Q: How do you tell how much PGA space a session is taking up?
A: per an oracle-L posting by "ManojKr Jha" 10/7/03
select * from v$process where addr in (select paddr from v$session where sid='<sid>');
select sum(pga_used_mem) from v$process where addr in (select paddr from v$session);
---
Q: How many sessions can Oracle handle? What is the max number of sessions?
A:
---
Q: How do I emulate an ora-00600 error?
A: posted to Oracle-L 10/8/03 by David Lord
declare
  ora600 exception;
  pragma exception_init(ora600, -600);
begin
  raise ora600;
end;
/
show errors;
---
Q: How do I change schemas (ala "use database" in Sybase)?
A: alter session set current_schema=scott;
Except that this is not necessarily recommended behavior.
---
Q: What does DML/DDL/DCL all stand for?
A: - DML: Data Manipulation language: insert/update/delete
- DDL: Data Definition language: create table, create index, etc
- DCL: Data Control language: grant/revoke
---
Q: What is undo management (9i feature)? How does it work?
A: Undo is the 9i equivalent of rollback segments/tablespaces in 8i and before.
---
Q: How do you "switch" undo tablespaces?
A: - Create the new one: create undo tablespace UNDOTBS2 datafile '/tmp/x.dbf' size 8000M;
- alter system to the new one: alter system set undo_tablespace = undotbs2;
- drop the old one: drop tablespace undotbs1;
- delete the old .dbf file
---
Q: What is Oracle's RDA?
A: "Remote Diagnostic Agent," a set of Unix shell scripts developed by Oracle to gather detailed information about Oracle's server environment to aid in problem diagnosis.
See Metalink Doc 314422.1 for getting started, download links, etc.
In unix it's rather easy to run; download the tar, put it into a directory, run the provided config script, then run the main RDA.sh script, answer its questions, and you have a nice .zip package to ship off.
---
Q: How do you run the windows version of RDA?
A:
---
Q: What is a GTT?
A: Global Temporary Table. Works much like tempdb tables in Sybase. The table definition is permanent, but the data is private to each session and is deleted at the end of the session (or at commit, if defined with on commit delete rows).
---
Q: What is the full list of dbms_ packages? How do I know what packages I have? What do they all do?
A: See the Oracle 8i Supplied Packages Reference guide. Some common ones:
dbms_output: prints strings to the screen
dbms_utility:
dbms_lob: work with Large Objects
dbms_support:
dbms_system:
dbms_stats: work with statistics
dbms_shared_pool:
dbms_snapshot: refresh
dbms_job: cron-like facility within oracle
dbms_scheduler: 10g replacement for dbms_job
---
Q: What is the full list of dba_ tables available?
A: SELECT * FROM dba_objects WHERE object_name LIKE 'DBA%';
---
Q: Can Oracle access another database's data files?
A: Oracle external procedures? Probably not any more than any other database can access another's data files.
---
Q: How do I "move" a table from one tablespace to another? Move table to a new tablespace?
A: new feature in 8i: alter table <table> move tablespace <new_ts>;
However, in 8i and 9i, if your table has a datatype of long or long raw, you cannot move it. Issue still not fixed in 10g or 11g.
You'll fail with this error msg:
ORA-00997: illegal use of LONG datatype
The only way to move it is to export the table, recreate the table in the new tablespace, and import the table with ignore=y.
Other issues with alter table move:
- the table will be unavailable during the move
- Longs won't move as depicted above; you can exp/imp and redefine the tablespace like this:
expdp dbsnmp/dbsnmp directory=data_pump_dir dumpfile=CGRITTON_users.dmp logfile=CGRITTON_expdp.log schemas=CGRITTON
drop the table with the long column
impdp dbsnmp/dbsnmp directory=DATA_PUMP_DIR dumpfile=CGRITTON_users.dmp logfile=CGRITTON_impdp.log tables='CGRITTON.PLAN_TABLE' remap_tablespace=users:users_enc
- tables with CLOBs or LOBs will move, but they'll leave behind an index structure that needs to be altered specially:
alter table BNJECK.SYS_IMPORT_FULL_01 move LOB(XML_CLOB) STORE AS XML_CLOB (TABLESPACE users_enc);
- indexes will be invalidated; you'll have to rebuild them, either in place or into the new TS, like this:
select 'alter index '||owner||'.'||index_name||' rebuild;' from dba_indexes where status='UNUSABLE';
- Partitioned tables must have individual partitions moved like this:
select 'alter table ' || table_owner || '.' || table_name || ' move partition ' || partition_name || ' tablespace READIDPLUS_enc_DATA01 nologging;' from dba_tab_partitions where table_owner='READIDPLUS1' and table_name='SCORE_SEND_WORK_RUN_DETAIL' order by partition_name;
alter table READIDPLUS1.SCORE_SEND_WORK_RUN_DETAIL move partition APRIL tablespace READIDPLUS_enc_DATA01 nologging;
---
Q: Can I use DBMS_REDEFINITION to move tables to new tablespaces?
A: Yes, and it has the added benefit of keeping the table available for DML and selects during the move.
Process: http://lefterhs.blogspot.com/2009/12/online-table-move.html
(assuming you are moving "my_dim_date" in TS1 to "my_dim_date2" in TS2)
- Created a test table copy of DIM_DATE called my_dim_date from the RDW schema in EDW1_PALM.
- Went into toad, grabbed the DDL for my_dim_date.
- Changed the ddl table name to be my_dim_date2
- But I also had to manually change the constraints and indexes one by one so they wouldn't create a conflict with existing constraints and indexes, and manually edit the target tablespace name to the new desired TS
- Ran the ddl to create the skeleton version of my_dim_date2 in the new TS
- (Alternatively, you can replace the DDL capture steps with a CTAS where 1=2, but I'm not sure what that does with indexes, grants, constraints, synonyms, comments, etc. And if you're doing a table with partitions and subpartitions, or LOBs, or IOTs or Longs, I'm not sure this is the simplest option either.)
- Confirm it can be redefined: EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('RDW','MY_DIM_DATE');
- Perform the redefine: EXEC DBMS_REDEFINITION.START_REDEF_TABLE(uname=>'RDW',orig_table=>'MY_DIM_DATE',int_table=>'MY_DIM_DATE2'); this makes an MV of the new object temporarily.
- Finish the redefine: EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE(uname=>'RDW',orig_table=>'MY_DIM_DATE',int_table=>'MY_DIM_DATE2'); at this point the table is now completely moved to the new tablespace.
- Drop the temp table: drop table rdw.my_dim_date2;
Basically, DBMS_REDEFINITION is a great option if you have to move a very small number of tables, or one or two very large tables, and don't want to incur app downtime. This turning phase 1 project is moving literally thousands and thousands of tables across 20 databases; dbms_redefinition is not a viable option unless you want to increase the workload on your DBAs 20-fold for this effort.
---
Q: What does dbms_job do? What is the syntax?
A: a builtin cron-esque facility for oracle servers. Example: this should run statspack.snap every hour.
sql> dbms_job.submit(:jobno, 'statspack.snap;', trunc(sysdate+1/24,'HH'), 'trunc(SYSDATE+1/24,''HH'')', TRUE, :instno);
select * from dba_jobs;
trunc(sysdate)+(trunc(to_char(sysdate,'sssss')/300)+1)*5/24/60 ought to give you every 5 minutes. Courtesy of Tom...
This will remove a particular job:
sql> exec dbms_job.remove(21);
Note, you can only remove a job that's your own.
Interesting side note discovered: failed dbms_mview.refresh jobs automatically enter themselves as jobs in the job queue. A refresh will attempt to restart itself 16 times after failure before stopping execution.
---
Q: What is dbms_scheduler? What are the main ways to use dbms_scheduler?
A: dbms_scheduler (or as the oracle docs call it, "the Scheduler") was created in 10g to replace the dbms_job package. It can not only schedule jobs to start at certain times but adds the capability of having triggering events kick off certain jobs (something dbms_job could not do).
Common dbms_scheduler tasks:
o create a job
BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name => 'my_new_job1',
    program_name => 'my_saved_program',
    repeat_interval => 'FREQ=DAILY;BYHOUR=12',
    comments => 'Daily at noon');
END;
/
o seeing what jobs are currently scheduled: select * from dba_scheduler_jobs;
o dropping a job
BEGIN
  DBMS_SCHEDULER.DROP_JOB ('job1, job3, sys.jobclass1, sys.jobclass2');
END;
/
---
Q: There's no repeat_interval listed for a job in dba_scheduler_jobs but it's repeating. How is that possible?
A: because it was created with a "named" schedule. This means it's using a named window. Run select * from dba_scheduler_windows; to see what windows are defined.
---
Q: What is a state object?
A: Essentially, a pointer in the shared_pool/SGA that contains information about a specific database resource. Pmon uses state objects and their references to "clean up" behind such resources when they're closed. Separate arrays exist for process, session and transaction state objects.
The sizes of these arrays are configurable via the processes, sessions and transactions init parameters.
---
Q: What is a "pinned" object?
A: an object that has been "pinned" to cache; an object can be forced to stay in memory. select * from v$db_object_cache and look at the "KEPT" column.
---
Q: What is a "snapshot?"
A: the colloquial term for a Materialized View in older versions (8i, 9i). In 10g and beyond, flashback capabilities have started to change the terminology of what a "snapshot" really is. dba_snapshots still lists information about materialized views, but dba_mviews is a better view for code.
---
Q: What is a Materialized View (MV)?
A: Similar to a view, except that the result set is physically stored the first time it's run. Like a higher-performing view. Snapshots/MVs can be read-only or updatable. Commonly used in data warehousing, where aggregations on very large fact tables are frequently done.
---
Q: What is MTS?
A: "Multi-Threaded Server." Oracle created shared server processes to handle a high workload of users and have them share certain server resources. This allows Oracle to scale better, and not consume so much memory per login.
---
Q: Is there a concept of Isolation levels in Oracle?
Q: What is the default isolation level in Oracle?
A: Yes. The ANSI standard defines four levels:
- Read uncommitted
- Read committed
- Repeatable read
- Serializable
Oracle's default is "Read Committed," which prevents dirty reads but allows nonrepeatable and phantom reads. The only other level Oracle offers is Serializable; the remaining two are not offered.
Two good links on them:
http://docs.oracle.com/cd/E11882_01/server.112/e40540/consist.htm#CNCPT1312
http://www.dba-oracle.com/t_oracle_isolation_level.htm
---
Q: Can you specifically grant "truncate table" privileges to a user?
A: No; truncate comes with "drop any table" by default. Code a simple stored proc or function to wrap the truncate, then grant execute on that proc to users.
You might run into issues with ETL tools that want to truncate tables by default.
---
Q: How can I get the creation date for a table? For any object?
A: the "created" field in all_objects:
select object_name, created from all_objects where object_name='<name>';
---
Q: How do I find my SID?
A: select sid from v$mystat where rownum<2;
---
Q: Can you change the SID/database name/dbname of your database after dbca creation?
A: Yes: use the "nid" utility. You can also change the DBID.
Process: (Metalink Note:224266.1 has guidelines as well)
- GET A BACKUP. I suggest a cold backup and switching all archive log files just prior to attempting this process. It will require recovery procedures afterward.
- connect to the target instance via the "sys as sysdba" account
SQL> shutdown immediate
SQL> startup mount
- in another window, as user oracle, run this to just change the DBID:
$ nid TARGET=sys/password@OLDDBNAME dbname=NEWDBNAME SETNAME=yes
Run this to change the database name. If you do NOT use the SETNAME=yes option, you'll change both the DBname and the DBID.
- this brings up a "DBNEWID" interactive program that prompts you to hit yes to change the DBID. I would highly suggest NOT doing this unless completely necessary.
- Once complete, via sqlplus do this:
sql> shutdown immediate
Before you startup again, do the following:
- create pfile from spfile, change the dbname and instance, and change all occurrences of the previous dbname underneath $ORACLE_HOME
- create a new pwd file:
orapwd file=/u01/product/9.2.0/dbs/orapwNEWDBNAME password=<new sys pwd>
- .... never gotten this step to work ????
orapwd password=Manager file=orapwORAIMPL
SQL> startup
this will error out, saying you need to "ORA-01589: must use RESETLOGS or NORESETLOGS option for database open"
SQL> recover database using backup controlfile;
---
Q: There's also a series of steps to do this by hacking init.ora and control files ... what is it?
A: never verified; pulled from various asktom threads. Here are the steps: 1.
restore a COLD backup to the new machine (the source dB must have been shut down normal or shutdown immediate, NOT abort).
1a: ensure all datafiles are oracle-writable (at least 640, definitely not 444)
2. Go through all your unix setup files and change your hardcoded ORACLE_SID values
3. edit tnsnames.ora; change the db target to the new SID
4. cd $ORACLE_HOME/dbs and rename the pfile
5. next, see if the orapwd file is enabled. If so, you must issue an orapwd command to modify it for the new database name.
6. CREATE CONTROLFILE SET DATABASE <newname> RESETLOGS ARCHIVELOG ....
7. ALTER DATABASE OPEN; or ALTER DATABASE OPEN RESETLOGS;
or (from an oracle-l discussion):
You need to shutdown the database cleanly, then startup
Alter system switch logfile
alter database backup controlfile to trace resetlogs
shutdown
Get your trace file from the udump directory - a good hint here is to empty it of trace files when you've done the first close, otherwise it can be like looking for a needle in a haystack. Rename it to ccf.sql (as per Oracle instructions).
Delete everything in the trace file until startup nomount.
Change CREATE CONTROLFILE REUSE DATABASE "EOC" RESETLOGS to CREATE CONTROLFILE SET DATABASE "EOC06" RESETLOGS
# out RECOVER DATABASE USING BACKUP CONTROLFILE
Rename any old controlfiles so they aren't visible to the database.
Edit the init.ora and change EOC to EOC06, then
SQLPLUS /nolog
connect / as sysdba
@ccf.sql
If you get errors: shutdown, startup and alter database open resetlogs.
Depending on the version it will want to create a new tempfile, but it should reuse the file so this shouldn't be a problem.
---
Q: How do I create a new password file (required by dbnewid)?
A: orapwd file=<filename> password=<password> entries=<max entries>
--
Q: How do I allow one user to create objects in a new tablespace?
A: A series of steps:
As sysdba, sql> alter user newuser quota unlimited on new_TS
then, each individual object creation priv must be individually granted...
sql> grant CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE TRIGGER, CREATE PROCEDURE, CREATE SEQUENCE to newuser;
In order to have userA be able to grant these privs to userB, userA must have been granted all these same permissions "with admin option."
---
Q: How do I see what users have quota on what tablespaces?
A: select * from DBA_TS_QUOTAS order by username;
---
Q: I got an ORA-01720 error when attempting to grant select on a view to another user. How do I fix this?
A: In order for userA to be able to grant select on an object to userB, userA must have been granted select "with grant option" on that object. This allows userA to then give cascading grant permissions to other users.
So if you're creating a view in UserA that references tables in UserB's schema, then in order to grant select on that view to UserC you must have been given grant select on the UserB table with grant option (as UserB or a DBA):
SQL> grant select on UserB.DIM_DATE to UserA with grant option;
Then create the view as UserA, and then you can (as UserA):
SQL> grant select on UserA.view to UserC;
and you won't get the ORA-01720 error.
---
Q: How do you tell what users have been granted quota on a particular tablespace?
A: select * from DBA_TS_QUOTAS;
---
Q: What is a good way to see if one process is blocking another?
A: Query dba_blockers, dba_waiters
---
Q: How do you kill a user session?
A: alter system kill session 'sid,serial#' where sid and serial# are two numbers from v$session for the session you wish to kill.
---
Q: How do you drop a user/schema?
A: drop user name cascade. HOWEVER: cascade will drop every object in the schema. This can be dangerous.
---
Q: Does drop user cascade also drop referenced objects if owned by another schema?
A: ??
---
Q: Is there any way to get a preview of what "drop user X cascade" will actually do?
A: ??
--
Q: What does "create force view xxx" do?
A: this "forces" the creation of the view even if the underlying tables/columns do not exist.
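A minimal sketch of the force-view behavior (the table and view names here are made up; the view is created in an INVALID state and only compiles once the missing table exists):

```sql
-- Without FORCE this fails with ORA-00942 (table or view does not exist);
-- with FORCE the view object is created anyway, with status INVALID:
create force view v_pending as
  select id, name from not_yet_created_table;

-- Check the status:
select object_name, status from user_objects where object_name = 'V_PENDING';

-- Once not_yet_created_table exists with matching columns, recompile:
-- alter view v_pending compile;
```

This is handy when deploying interdependent schemas in an arbitrary order: the views can all be created up front and compiled once their base tables arrive.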
---
Q: What is the consequence of running "shutdown immediate" versus "shutdown" or "shutdown normal"?
A: The different levels of shutdown:
- shutdown: All new user connections are blocked, but the server will not shut down until all user connections end and transactions finish. (also known as "shutdown normal")
- shutdown immediate: All new user connections are blocked, all existing user connections are (attempted to be) terminated, and the server rolls back any uncommitted transactions.
- shutdown transactional: similar to shutdown immediate, but allows in-flight transactions to finish (prevents any new ones from being started, and prevents new user connections)
- shutdown abort: instantaneous shutdown, and will guarantee instance recovery procedures occur on the next startup.
---
Q: How do you get oracle to read from/write to external files?
A: via the utl_file supplied package.
- set utl_file_dir in the init.ora; this requires a reboot
- make sure oracle can read/write the directory as needed
then run code something like this:
declare
  LFileHandler UTL_FILE.FILE_TYPE;
begin
  LFileHandler := UTL_FILE.FOPEN ('/tboss/logfiles','test.file','w');
  UTL_FILE.PUT_LINE (LFileHandler,'hello world');
  UTL_FILE.FCLOSE (LFileHandler);
end;
/
---
Q: How do you *really* stop ORA-01555 errors from occurring? Is there a "perfect" undo_retention size that will forever eliminate ORA-01555 errors? How do you properly size the undo tablespace? (Note: ORA-30036 is essentially the related out-of-undo-space error in 10g.)
A: There is no silver bullet. Even when you have undo_retention and the undo tablespace sized exactly as the manual suggests, you are still susceptible to 1555 errors.
(the formula is: undo tablespace size == undo_retention * undo blocks generated per second * blocksize)
The best way to avoid 1555s is to write good code. And do the following:
- Commit less frequently. Frequent commits will create more undo blocks that are susceptible to being overwritten if a transaction runs too long.
- Don't fetch across commits; this is a bad programming technique anyway.
- increase undo_retention.
- SELECT * FROM v$undostat order by maxquerylen desc; the MAXQUERYLEN column shows the longest running query (in seconds) in each interval. txncount divided by 600 seconds (each v$undostat row covers a 10-minute interval) will give approximate transactions per second.
---
Q: How are database links created and used?
A: create public database link x.com connect to user identified by pwd using 'databasename from tnsnames.ora';
select * from table@linkname;
----
Q: I accidentally deleted an underlying .dbf file without dropping its associated tablespace. How do I just get rid of the tablespace altogether?
A: alter database datafile '/raid/oradata/CHARLIE/example01.dbf' offline drop;
system@dw30prd> alter database datafile '/data/tts/staging_holding_01.dbf' offline drop;
system@dw30prd> drop tablespace staging_holding including contents and datafiles;
---
Q: Upon startup, I'm getting errors that indicate an underlying datafile is missing, and my database won't start. How do I just drop the tablespace?
ORA-01157: cannot identify/lock data file 17 - see DBWR trace file
ORA-01110: data file 17: '/u04/oradata/stg30tst/staging_holding2_01.dbf'
A: follow these steps:
sql> startup mount;
sql> alter database datafile '/u04/oradata/stg30tst/staging_holding2_01.dbf' offline;
sql> alter database open;
sql> drop tablespace staging_holding2 including contents and datafiles;
If the offline statement gives you an error, try:
sql> alter database datafile '/u04/oradata/stg30tst/staging_holding2_01.dbf' offline drop;
then
sql> alter database open;
sql> drop tablespace staging_holding2 including contents and datafiles;
---
Q: How do you move a datafile from one filesystem to another?
A: Very straightforward: (from several posts in c.d.o.s over the years)
ALTER TABLESPACE foo OFFLINE;
copy the file with the operating system
ALTER TABLESPACE foo RENAME DATAFILE '/old_path/toto.dbf' to '/new_path/toto.dbf';
ALTER TABLESPACE foo ONLINE;
---
Q: How do you move the system tablespace?
A: You can't take system or sysaux offline, so you have to shut down the database. Process:
shutdown
copy the file with the operating system
startup mount -- reads the control files but doesn't open the datafiles
ALTER DATABASE RENAME FILE '/old_path/toto.dbf' to '/new_path/toto.dbf';
alter database open;
---
Q: How do you move control files?
A: You can't use alter database or alter system; you have to change the init parameter telling the database where they are.
shutdown immediate
cd /u01/app/oracle/oradata/ORARPT
mv control01.ctl /u01/oradata/ORARPT
mv control02.ctl /u02/oradata/ORARPT
mv control03.ctl /u03/oradata/ORARPT
create pfile from spfile;
vi initORARPT.ora and hand-modify the *.control_files variable to have the new directory locations
log out, log back in to the sqlplus window as sys as sysdba
create spfile from pfile;
startup
---
Q: Is there any harm in leaving tablespaces offlined for extended periods of time, say to gzip the data file and store it away?
A: no; this sequence works just fine:
- alter tablespace XX offline;
- gzip the datafile at the os level
- move the datafile to another directory
- shutdown, restart the database
- gunzip, move back the datafile
- alter tablespace XX online;
back online with no issues.
---
Q: How do I move tablespaces en masse from one database to another? Can I copy a datafile from one database to another?
A: No, you can't, unless you use transportable tablespaces.
Just copying a datafile from one database to another, and modifying the controlfile to read the file, will result in this error:
ORA-01503: CREATE CONTROLFILE failed
ORA-01159: file is not from same database as previous files - wrong database id
ORA-01110: data file 9: '/data1/oradata/STG30UAT/movetest01.dbf'
However, if you use the transportable tablespace option, you should be able to move a tablespace from one database to another. Steps:
- src db: alter the TS read only, exp with transport_tablespace=y
- ftp/copy the datafiles and exp .dmp files to the new location
- tgt db: imp with transport_tablespace=y, alter the TS read write
---
Q: Can you prevent users from altering their own tables?
A: NO, you cannot. The best thing to do is never give users the ability to log in as schema owners. Duh.
---
Q: My "alter tablespace xxx read only" is taking forever. Can I speed it up?
A: Yes: select * from the tables within the tablespace; this reads the blocks into memory and makes it easier for Oracle to check each block and ensure there are no open transactions.
However, there's a known "feature" in oracle 9i that causes the read only attempt to hang on *any* unresolved transaction, whether or not the transaction touches the tablespace in question. Annoying. Especially with toad users who have autocommit off, since an implicit "begin transaction" is issued with each sql window and thus they represent uncommitted transactions.
---
Q: How do I diagnose and troubleshoot deadlocks?
A: Join v$session to dba_objects on row_wait_obj# (see oracle_admin.sql for an example query).
---
Q: How can I specifically force deadlocks for testing?
A: Create a simple table with two rows. Select row1 for update in one session, do the same for row2 in a second session, then reverse.
Example:
In session 1: select * from isotest where id=1 for update;
In session 2: select * from isotest where id=2 for update;
now run these:
In session 1: select * from isotest where id=2 for update;
In session 2: select * from isotest where id=1 for update;
Within a few seconds, one of the two sessions will break with the following
error:
ORA-00060: deadlock detected while waiting for resource
While the other session will continue to wait for a commit to be issued in
the deadlocked session.
---
Q: How can I tell if my server has booted from pfile or spfile?
A: show parameter spfile ... if it's populated, the server booted from it.
---
Q: How do I create an "OPS$" type account? or an "identified externally" user.
A: (from Tom Kyte) First set init.ora value os_authent_prefix to be "ops$"
sql> create user ops$tkyte identified externally;
This lets you log in by just typing
$ sqlplus /
---
Q: How do I automatically start Oracle upon server boot?
A:
- In unix, you need an RC script installed in /etc/rcX.d or /etc/init.d
(depending on your unix flavor).
- For Windows servers, it's all about the services ... if you go to your
services admin screen (start->settings->control panel->Administrative
tools->services on W2k, XP), you can have your system automatically start
oracle by setting the Startup Type to be Automatic for these services:
- OracleOraHome92TNSListener
- OracleServiceSID
---
Q: What are the different commands one can pass to startup?
A:
- startup: reads the spfile by default and starts the database normally,
mounts the database and opens it. (If it cannot find spfileSID.ora, it
will look for spfile.ora, then initSID.ora, then fail.)
startup without any options mounts and opens the database using the default
parameter file. Note there is no "startup spfile=..." option; to start with
a non-default spfile, point startup at a one-line pfile that contains only
spfile=/path/to/spfile
- startup pfile=/path/to/pfile: manually specify a pfile at startup
- startup nomount: starts the instance but does not mount or open the
database. If you do this, you'll have to issue "alter database mount" and
then "alter database open" to get the database started. Used for
maintenance issues, typically only during database create.
- startup mount: starts the instance, mounts the database but does NOT open
it. Used when renaming datafiles, administering redo log files, turning on
archive logging, or performing database recovery.
- startup restrict: allows sysdba role'd users only. To "unrestrict" the
database later, you'd have to either restart normally or issue
alter system disable restricted session;
- startup force: forces an instance to start, or a method to forcibly
shutdown an instance that won't shutdown w/ other normal attempts.
- startup open recover: if recovery is known to be needed, issue recover
command to start media recovery process.
---
Q: How can you tell what OS an Oracle server is running on?
A: select banner from v$version;
BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.3.0 - 64bit Production
PL/SQL Release 9.2.0.3.0 - Production
CORE 9.2.0.3.0 Production
TNS for HPUX: Version 9.2.0.3.0 - Production
NLSRTL Version 9.2.0.3.0 - Production
The "TNS" line will always show what O/S the particular server is running
on. Just typing "!uname -a" will only show what your local sqlplus client
is running on.
---
Q: Is there any way to turn OFF the sysdba auditing that dumps a file into
$ORACLE_HOME/rdbms/audit/ora_NNNN.aud every time someone connects as
sysdba? How do I turn OFF auditing?
A: set "audit_sys_operations" to false to turn off the os-level files.
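For example (a sketch; audit_sys_operations is a static parameter, so the change needs scope=spfile and a bounce to take effect):

```sql
-- stop the per-connection *.aud files for sysdba logins
alter system set audit_sys_operations=false scope=spfile;
shutdown immediate
startup
```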
To turn off all auditing, set audit_trail=none.
---
Q: How do you setup auditing in the database? What are good things to audit?
A: First things first: shut down the database and set these parameters in
init.ora:
audit_sys_operations=true
audit_trail=DB (or OS): turns on auditing and directs the data to either
the DB or the OS level.
and confirm that audit_file_dest is set correctly. Reboot, and auditing
will be available.
To then audit specific things, look up the audit command in the sql
reference. Examples:
audit select any table by biconnect by access;
To turn off auditing just use noaudit and reverse whatever was implemented.
NOAUDIT ALL; turns everything off.
noaudit select any table by biconnect; reverses the above.
---
Q: What's a good starter list of things to audit in a database?
A: most of these are high-level drop/create/truncate on the objects listed.
audit tablespace;
audit database link;
audit table;
audit materialized view;
audit trigger;
audit user;
audit alter table;
audit grant table;
/* this gets logins */
audit session;
then to query it:
select * from dba_audit_trail where action_name in ('LOGON','LOGOFF')
order by timestamp desc;
---
Q: How would you audit at a table level?
A: audit insert,update,delete on system.audit_test;
then query dba_audit_object looking at the ses_actions...
---
Q: How do I tell what is being audited right now in my database?
A: query select * from DBA_OBJ_AUDIT_OPTS;
---
Q: How do you get rid of these messages every 5 minutes in the error log?
"Restarting dead background process QMN0"
A: Causes:
- offline'd example TS
- dropping of AQ user without properly stopping the service
Solution: alter system set aq_tm_processes=0 (which you should be doing
anyway, on all Oracle databases, unless you're actually using Advanced
Queuing)
---
Q: Is it a good idea to use autoextend on my tablespaces?
A: Religious issue. Very strong opinions for and against.
Pros for using autoextend
- eliminates the "2am phone call" b/c you've run out of space
- Eases one major aspect of a development DBA's job (space monitoring and
allocation).
Cons for using autoextend
- using it on rollback/undo, temp or system can end up with a crashed
database with very little flexibility to get it back online
- developers w/ access to an autoextendable TS can blow out an entire f/s
- autoextend "unlimited" can have serious repercussions.
Generally speaking, some people feel the use of autoextend is "lazy," while
others who make use of it have implemented it in an intelligent way so they
can get the best of both worlds (manageability and dba agility).
Recommendations:
- never use autoextend on system components (undo, temp, system)
- use autoextend on segments with a max data size, coordinated with a
paging mechanism when you reach certain thresholds.
---
Q: How do I tell what tablespaces have autoextend set on?
A: SQL> select file_name,tablespace_name,autoextensible from dba_data_files;
---
Q: How do I turn OFF autoextend on a particular tablespace?
A: You do it at the data file level, NOT the tablespace level. Get the list
of datafiles for your tablespace and find out which are autoextensible.
sql> select * from dba_data_files where tablespace_name='USERS';
Then, alter the database and modify the datafile.
SQL> alter database datafile
'/u01/product/9.2.0/oradata/dw20tst/users01.dbf' autoextend off;
---
Q: Can you change the owner of a table? (chown a table, change owner of a
table)?
A: No. Not possible. Alternatives:
- create table as select
- exp/imp
- sqlplus copy command
- use synonyms instead of actually moving the table.
Best way is probably the use of transportable tablespaces: From an asktom
posting, follow these meta steps.
1. create user new_user...
2. grant ... to new_user;
3. execute dbms_tts.transport_set_check(...);
4. alter tablespace ... read only;
5. exp transport_tablespace=y tablespaces=...
6. drop tablespace ...
including contents;
7. imp transport_tablespace=y tablespaces=... datafiles=...
fromuser=old_user touser=new_user
8. create nondata objects in new_user schema
9. [drop user old_user cascade;]
10. alter tablespace ... read write;
---
Q: How do you get the version and status of the Data Dictionary components?
A: query dba_registry
column comp_id format A10
column version like comp_id
column comp_name format A30
set pagesize 0
select comp_id, status, version, comp_name from dba_registry order by 1;
---
Q: What are some other miscellaneous administrative things that should be
done on an Oracle server on a regular basis?
A:
- Clean out the listener log: perhaps put these 3 commands in cron
% mv listener.log listener.log.old (perhaps a datetime stamp here)
% lsnrctl stop
% lsnrctl start
- alter database backup controlfile to trace; run this via a database
trigger upon startup every time. This command creates a text script
version of the controlfile, suitable for disaster recovery purposes.
---
Q: Where are the listener logs? How can you tell where they are from config
files?
A: generally $ORACLE_HOME/network/logs
To tell exactly, open the listener.ora file and look for files in the
oracle_home directory for each specific listener you're looking for.
---
Q: Are there any built-in scripts to generate the ddl of a schema?
A:
- $ORACLE_HOME/rdbms/admin/utlrp.sql, which recompiles any invalid objects,
and utlirp.sql, which will invalidate and then recompile objects.
- There are dozens of scripts/alternatives out there to do this otherwise.
Toad has a great schema recompile.
- 4/20/04: update! dbms_metadata.get_ddl! gets table ddl.
set pagesize 0
set long 90000
select dbms_metadata.get_ddl('TABLE','LD','EHRI20TEST') as ddl from dual;
set pagesize 0
set long 90000
select dbms_metadata.get_ddl('MATERIALIZED_VIEW','DIM_ZIP_CD','DMSGR_EX')
as ddl from dual;
Gets a complete ddl dump of a table, its indexes, storage clauses and
constraints.
NOTE: this is case sensitive!
You must have all three options in all caps.
---
Q: How can I recreate the ddl for a user?
A: combination of dbms_metadata.get_ddl and dbms_metadata.get_granted_ddl.
set pagesize 0
set long 90000
select dbms_metadata.get_granted_ddl('ROLE_GRANT','DW_ADMIN') as grants
from dual;
select dbms_metadata.get_granted_ddl('SYSTEM_GRANT','DW_ADMIN') as grants
from dual;
select dbms_metadata.get_granted_ddl('OBJECT_GRANT','DW_ADMIN') as grants
from dual;
Note! username must be in all caps!
---
Q: How can I get the DDL for a tablespace?
A: set long 4000
select dbms_metadata.get_ddl('TABLESPACE','TS_NAME') from dual;
SELECT DBMS_METADATA.GET_DDL('TABLESPACE',a.tablespace_name)
FROM dba_tablespaces a
will get all of them.
---
Q: Can I mount database A's tablespace file from within database B?
Q: Can I add an existing tablespace file to a new database?
A: NO, you cannot simply plug in a new tablespace file. This is the "DBID"
problem of Oracle; database A has its own DBID, and that dbid is embedded
in the tablespace .dbf files, preventing Database B from being able to
read the file. Even if you shutdown and hack database B's control file to
force it to read the .dbf file, you'll get this error on startup:
ORA-01159: file is not from same database as previous files - wrong
database id
Solutions: 1. export/import, or 2. use transportable tablespaces to bring
in the .dbf file.
---
Q: Is it possible to mount database A's .dbf files in Database B, if I were
to change the SID, DBname and DBID of database B to match Database A's?
A: ??? need to test. Test: basically create a dbf file in Database A, then
change the dbid in database B to match database A's, then try to read in
the file.
---
Q: How can you tell the last time a table was accessed? How can you tell
the last time a table was modified? What is the last DML date?
A: You cannot tell when a table was last accessed without having "audit
select on" in the database.
select object_name, created, last_ddl_time from dba_objects
where object_name='YOURTABLE';
will give you the create and last modify dates.
---
Q: Does the order of columns matter?
A: Only in very specific cases, namely tables that are being used in a
switch partition scheme.
---
Q: How can I automatically terminate user sessions after a set amount of
time?
A: two ways:
- modify the profile of the user and add a "connect_time" resource
limitation. select * from dba_profiles; for more info.
- create a logon trigger that limits the amount of connected time (old
school method)
---
Q: I start up my database and get an error message like this:
WARNING: EINVAL creating segment of size 0x000000011d400000
fix shm parameters in /etc/system or equivalent
A: you've got more memory configured in your database server than you have
shared memory defined in /etc/system (or whatever OS equivalent). Tune up
shmmax or tune down your db_cache/sga values.
---
Q: Can I change a schema name? Can I rename a schema? Can I rename a user?
Alter user X rename to Y?
A: Yes, but, it involves hacking the data dictionary and is unsupported.
You could conceivably do this:
SQL> update sys.user$ set name='new' where user#=N and name='old';
and then hunt down all synonyms, hard coded usernames in code, views, etc.
The PROPER, supported way would be to export out, import in with
fromuser='olduser' and touser='newuser'
---
Q: I've got block corruption (ORA-01578 errors). How do I fix them?
A: See Metalink note 47955.1 for detailed information.
Also see Metalink note 28814.1, "Handling Oracle Block Corruptions in
Oracle7/8/8i/9i/10g" for good diagnosis information
Error looks like this:
ORA-01578: ORACLE data block corrupted (file # 17, block # 2045941)
ORA-01110: data file 17: '/u04/oradata/stg30tst/r3history_staging1.dbf'
First, to see what table/schema is affected, run this sql (substitute in
the file# and block# from the error message)
SELECT SEGMENT_NAME, SEGMENT_TYPE, OWNER
FROM SYS.DBA_EXTENTS
WHERE FILE_ID = 17
AND 2045941 BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1;
3 options exist for normal tables:
o Restore and recover the database from backup (recommended).
o Recover the object from an export.
o Select the data out of the table bypassing the corrupted block(s).
If you don't have a backup (either rman or export), you can try to do an
insert into ... select * from the table, but it probably fails. If you're
lucky and it's an index object, simply drop and recreate the index.
OR, if you see corruption by doing a select * from v$backup_corruption,
you can repair the damaged blocks with RMAN block media recovery, e.g.
RMAN> blockrecover corruption list;
---
Q: What causes block corruption? How can I be proactive about detecting it?
Is there anything I can do to prevent it from occurring?
A: Block corruption is caused by (see Metalink Note 77587.1)
o Bad I/O, H/W, Firmware.
o Operating System I/O or caching problems.
o Memory or paging problems.
o Disk repair utilities.
o Part of a datafile being overwritten.
o Oracle incorrectly attempting to access an unformatted block.
o Oracle or operating system bug.
You can be proactive by doing the following:
- ANALYZE TABLE/INDEX/CLUSTER ... VALIDATE STRUCTURE cascade
- use dbverify
- Use RMAN to do backups, which detects block corruption automatically
Can I prevent block corruption? No, you can only plan ahead for it.
---
Q: What command can I run to detect block corruption?
A: If you run RMAN, it will populate corrupt blocks in these views: SQL> select * from v$copy_corruption; SQL> select * from v$backup_corruption; otherwise, run these commands: analyze table dmsgr_ex.fact_patient_encounter validate structure cascade; Note: if you get ORA-14508: specified VALIDATE INTO table not found you're trying to analyze a partitioned table and you need to create the expected "invalid_rows" table via @$ORACLE_HOME/rdbms/admin/utlvalid.sql script. bulk methods: select 'analyze table ' || owner || '.' || table_name || ' validate structure;' from dba_tables where owner='DMSGR_EX' order by table_name; --- Q: What are the various options available when creating Materialized Views? What are the impacts/pros and cons for each? A: - Simple MV: a basic MV is created completely static and with no refresh options (or using "never refresh") essentially - on prebuilt table: allows you to convert an existing table to be the "starting point" of a MV. If the MV is dropped, the object reverts to being a normal table. The database does not confirm that the data in the table actually matches the query in the MV though. - build immediate: default, builds the MV immediately - build deferred: instructs Oracle to NOT build the MV but to wait for a refresh demand. Until refreshed for the first time, MV stays in "unusable" state and cannot be used for query rewrite. - for Update: allows the MV to be updated. Used in replication envs. - Refresh Fast: fast refresh only updates MVs with incremental changes done to the base tables. Needs MV logs specifically created, and needs the MV logs in place before the MV is created. Certain restrictions apply (no analytic functions, e.g.). MV Logs can be a severe performance hindrance. - Refresh complete: complete refresh every time, even if Fast refresh is available. - Refresh force: default: will use fast if avail, complete otherwise. 
- Refresh on demand: default action; only refreshes when a specific
refresh command is issued (dbms_mview.refresh)
- Refresh on commit: does a fast refresh whenever a transaction is
committed on a table referenced in the MV. Performance hit; slows down
transactions b/c the fast refresh is done in conjunction w/ the
transaction commit.
- never refresh: as it says; never allows the MV to be refreshed. Can only
reverse with an alter materialized view refresh command.
- with primary key: default behavior, makes the MV depend on the PK
constraints of the master tables. Recommended.
- with rowid: older method of doing MVs; not recommended, limited
capabilities, only use when not all the PK fields of your master table
are in the MV.
- Query Rewrite enabled/disabled: default is disabled. Always analyze the
MV or else Oracle won't know how to use it in query rewrite.
- logging/nologging: Default is logging.
- compress/nocompress: Default is nocompress
- parallel/noparallel: default is noparallel
- cache/nocache: put blocks in the MRU section of the buffer chain.
nocache specifies the opposite. Options are ignored if you bind the MV to
the keep or recycle pools.
- scope for: restricts the rows referenceable in the MV by using a scope
ref constraint
- Reduced Precision: relaxes the loss of precision on certain columns when
making the MV
---
Q: How do you refresh a materialized view? What are the options in
dbms_mview?
A: exec dbms_mview.refresh('owner.MV_name',method=>'method',
parallelism=>0);
where
- owner.mv_name: can be a comma-separated list, no synonyms, or an array
- method is one of C,c (complete), ? (force), f (fast), a (always refresh,
same as C)
- the rest of the options are more obscure and I'd never set them, use as
above. See the manuals
---
Q: Are MVs refreshed during alter table exchange partition?
A: Yes they are; this mv was refreshed correctly during testing:
create materialized view t1_mv
build immediate
refresh on commit
as select * from t1 where col2='DOD';
---
Q: Is it faster to drop and recreate a Materialized View instead of
refreshing it?
A: Possibly, depending on your version.
9i and previous: a complete refresh was an implicit truncate table, then
insert /*+ append */
10g: the behavior has changed: a complete refresh now means a delete and
normal insert. To restore the 9i behavior, set the atomic_refresh method,
like this:
exec dbms_mview.refresh('MV_NAME', method=> 'C', atomic_refresh => false);
---
Q: Why was the refresh method changed from 9i to 10g to NOT do the
truncate?
A: The default behavior was changed so that the entire refresh is done in
one transaction and thus the data does not disappear from the MV during
the time between the truncate finishing and the refresh finishing. Makes
sense.
---
Q: how do you create a materialized view to refresh automatically?
A: use "start with" and "next" clauses
create materialized view mv_name
refresh complete
start with sysdate next sysdate+1
as select * from your table...
To get it to refresh specifically at midnight every night, change the
start with to be midnight of today ...
----
Q: What are some caveats/tricks to using alter table exchange partition?
A:
- tables must be created absolutely exactly alike (indexes, columns, null
constraints, pk, fk, all ri, etc).
- _minimal_stats_aggregation=FALSE in order to get the stats to come over
- you can NOT have globally gathered stats on the table! A full dbms_stats
job prevented the stats from going in, even with the above parameter. Had
to delete_table_stats, then they would come over just fine.
---
Q: What is the "hakan factor" and why does it matter when doing alter table
exchange?
A: some internal table property related to tables with bitmap indexes.
Known bug that causes ORA-14642 when trying to do alter table exchange
partitions.
If you run this command (with the two tables in the in clause being the
partitioned and target nonpartitioned table) and the numbers don't match,
you've got the problem.
select a.object_name,b.spare1
from dba_objects a, tab$ b
where a.object_id=b.obj#
and a.object_name in ('F_PAYROLL','F_PAYROLL_2006032')
and a.object_type='TABLE';
See metalink note 248634.1 for more detail. Causes for this:
1. Partitioned or non-partitioned table was Compressed or Uncompressed
2. One of the tables has been altered by command: 'Alter Table Nominimize
Records_per_block'.
3. Table has been modified by adding Not Null Constraints
Bug supposedly fixed in 9.2.0.7, but still seeing it. See note 3747472.8
10/13/06: update, seems to be Bug 4221789 - ORA-14642 during EXCHANGE
PARTITION after DROP/ADD column. Fixed in 9.2.0.8 (Note:4221789.8).
---
Q: What are the various statuses as listed in v$session?
A: from the doc set..
- ACTIVE (currently executing SQL)
- INACTIVE (no currently executing SQL but login session still active in
database)
- KILLED (marked to be killed)
- CACHED (temporarily cached for use by Oracle*XA)
- SNIPED (session inactive, waiting on the client). In reality it means
that Oracle has killed the session b/c it reached a defined resource
limit.
---
Q: How do I get a list of all the physical locations of all my important
files?
A: select * from dba_data_files;
select * from v$logfile;
select * from v$controlfile;
select * from dba_temp_files;
---
Q: I want to change my undo tablespace, but when I go to drop the old one I
get the error: ORA-30013: undo tablespace 'UNDOTBS1' is currently in use.
How do I find out what user has the open transaction that's holding onto
my undotbs?
A: select * from v$rollstat where status='PENDING OFFLINE';
However, finding the transaction holding that rollback segment is
challenging. Note:341372.1 has a series of commands
---
Q: Is there a minimum length of a username in Oracle?
A: Nope: SQL> create user a identified by a default tablespace sandbox temporary tablespace temp account unlock; User created. --- Q: how do you change users within sqlplus? A: alter session set current_schema = newuser; However, this won't necessarily show you exactly what that user may be seeing in terms of permissions. --- Q: How do you move the "index" on a LOB column? I get this error: ORA-02327: cannot create index on expression with datatype LOB (you might also get an ORA-22868 when trying to drop a TS that has LOB segments of tables located elsewhere) A: ALTER TABLE foo MOVE LOB(lobcol) STORE AS lobsegment (TABLESPACE new_tbsp STORAGE (new_storage)); ALTER TABLE edw_itsa.itsa_openrequisitions_tmp MOVE LOB(REQ_DESCRIPTION) STORE AS lobsegment (TABLESPACE edw_itsa_ix_01); --- Q: What was the old "grant connect" and "grant resource" commands for? Why are they bad? A: These roles continue to exist in 10g and beyond for backwards compatibility. select * from DBA_SYS_PRIVS where grantee='CONNECT'; select * from DBA_SYS_PRIVS where grantee='RESOURCE'; connect simply grants "create session" resource grants create table, procedure, trigger, sequence, type, cluster, operator and indextype to its grantees. Much better to individually grant roles and create privileges as needed. --- Q: What does the command "alter user X default role all;" do? A: It serves to "regrant" any and all roles that have been granted to the user prior to this session. When a role is granted to a user, it is granted as a default role. This alter user statement merely protects and maintains the roles previously granted. --- Q: How do you restrict a user's login to a certain time period during the day? 
A: Modify the system.log_on trigger and add code like this: if (user = 'DRES_RO') then IF (to_number(to_char(sysdate,'HH24'))>= 9) and (to_number(to_char(sysdate,'HH24')) <= 17) THEN RAISE_APPLICATION_ERROR(-20005,'DRES_RO Logon only allowed outside business hours'); END IF; end if; You can restrict all sorts of other resources (cpu time, memory, etc) by the use of User Resource Plans. See Chapter 24 Using the Database Resource Manager, in the DBA Guide in the manuals. --- Q: How do I fix the ORA-02391 exceeded simultaneous SESSIONS_PER_USER error? A: modify the sessions_per_user limit in the user's profile or assign a different profile select * from dba_profiles where resource_name='SESSIONS_PER_USER'; alter profile default limit sessions_per_user unlimited; =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Windows-Specific Administration Questions/Windows Specific/ =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- --- Q: How does the Oracle server actually work in Windows? A: Paraphrased From an interesting Oracle-Rdbms yahoo group email 3/31/03 by showrun@yahoo.com: Oracle 8i,9i starts as a Windows service (so that it starts upon boot). Then, the server waits for its first connection to establish the server threads and the SGA. Shutdown doesn't actually shutdown the server; it just terminates all active threads and returns it to the initial state. Additionally, there are two other services/processes started: the Server Manager and the Net8 Listener. --- Q: How can I address more than 4gb on a Windows 32-bit operating system? A: See metalink note 225349.1 /3GB switch in boot.ini allows for Oracle to address 3gb of the memory, leaving 1gb for the OS. But a need arises for Physical Address Extensions (PAE) (also known as Address Windowing Extensions AWE). 
How to implement: (note, some 32bit OS's already have it enabled, and no boot.ini mods needed) - implement /PAE switch in boot.ini multi(0)disk(0)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server" /3GB /PAE - USE_INDIRECT_DATA_BUFFERS=TRUE in init.ora follow the instructions in the Metalink note. --- Q: What is the maximum block size of an oracle database created in NT/XP? A: 16k ... an attempt to create a 32k block sized DW instance failed. --- Q: How do you set ORACLE_HOME and ORACLE_BASE in Windows NT/XP environments? Q: Where is ORACLE_HOME in Windows environments? A: Start->settings->control panel, click on System icon Advanced tab, click "Environment Variables" button Click New, add ORACLE_HOME and ORACLE_BASE as appropriate (usually, oracle_base is c:\oracle and oracle-home is c:\oracle\ora92) --- Q: Where is the default $SQLPATH variable for windows servers? A: ?? Not believed to be one; I generally put login.sql and other commonly run sql commands into whatever directory the cmd prompt defaults to (usually c:\documents and settings\username or c:\windows\system32). --- Q: How do you set your ORACLE_SID? A: similarly in unix c:\> set ORACLE_SID=XXX c:\> sqlplus ... there's no concept of exporting variables in Windows. Things should just work. --- Q: What group does a user have to be to be able to log into the Database as sys without knowing the password? A: ORA_DBA (might be a custom group created upon installation) then you can c:\> sqlplus /nolog SQL> connect / as sysdba --- Q: Does the Oracle process on windows keep an open file pointer to its alert.log (Important because backup processes will hang on open files)? A: No it does not: if you rename an alert log file (for rotating log files perhaps), the next time Oracle needs to write to the file, it creates a new logfile name. =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Unix-specific Administration questions/unix/ Q: how do you do a recursive grep? 
A: find . -exec grep texttosearchfor {} \; -exec ls -l {} \;
or, to add in file name searching and specific path starting points:
find /tmp -name \*ipt -exec grep pall {} \; -exec ls -l {} \;
(modern greps can also do this directly: grep -r texttosearchfor .)
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Space Management/Storage Management/ASM/
---
Q: What is the difference between a locally managed tablespace (LMT) and a
dictionary managed tablespace?
A:
- Locally managed: Default: auto-allocate: use the "extent management
local" option. This supposedly eliminates the DBA's need to worry about
extent sizes and table fragmentation. Extents will be allocated and
deallocated automatically. Supposed to be a godsend for DBAs. If you use
an LMT with system-managed (autoallocate) extents, then any storage
clauses used on tables created within the tablespace will be ignored.
ex: create tablespace test datafile 'c:\oracle\oradata\boss\test.dbf'
size 100k extent management local;
- Dictionary managed: Old school method: Extent management by the data
dictionary. You can convert a DMT to an LMT, but not the reverse.
Really?? What is DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_FROM_LOCAL then?
- If using DMT, set initial = next extent, pctincrease=0, standardize
extent size within the tablespace to avoid fragmentation.
---
Q: Is it a myth that setting pctincrease=1 (not 0 as highly recommended) in
a DMT will cause the SMON process to automatically coalesce free space?
A: Yes! It's a common myth that setting pctincrease=1 will prevent
fragmentation, when in fact it causes it.
??? Or is it? Some experts don't seem to agree. But, according to the
Oracle Concepts manual, this happens automatically when an object extends.
Doesn't work w/ temporary tablespaces?
---
Q: How do I convert a DMT to a LMT?
A: (from Oracle-L post 10/27/03 by Andy Horne)
DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL (tablespacename, unitsize,
relative file number)
- Limitations: can't be system, offlined, already LMT, or temporary (to
make a temporary tablespace LMT, drop it and recreate it)
Caveat: this dbms_ package is said to be buggy. Some recommend just
creating a new LMT tablespace and alter table move the tables over.
---
Q: Is it a myth that LMTs become horribly fragmented by the constant
allocating/deallocating of extents?
A: No, in fact a demonstration of this was posted to Oracle-L on 10/13/03
by "Niall Litchfield" that easily displayed how extents were NOT
necessarily released as expected.
??? Perhaps his example was too simplistic. Jury out. Perhaps the
fragmentation comes from LMTs using autoallocate (which allocates
different extent sizes). Instead, use uniform extent sizes and no
fragmenting can occur.
---
Q: How do I tell how much physical space a table is using?
A: Several ways:
- You can count the segments (to get a rough estimate of how much space
the server has allocated): dba_segments.
- or the extents (to get a finer granularity): dba_extents.
- Best way: compute stats, select num_rows * avg_row_len from dba_tables
- Even more perfect way: scroll through the table, counting the exact
length of each field in each row (amazingly slow though).
---
Q: How do I find out my table's High Water Mark (HWM)?
A: 2 ways:
- table blocks used - empty_blocks allocated to the table - 1 (see
oracle_admin.sql for exact queries).
- DBMS_SPACE.UNUSED_SPACE and DBMS_SPACE.FREE_BLOCKS packages. Roll these
two package commands into a cursor or stored proc. See hwm_calculate.sql
for an example
exec dbms_space.free_blocks('DW_ADMIN','REF_ABC_AGE_GROUP','TABLE',1,:tmp);
DBMS_SPACE.FREE_BLOCKS('SCOTT', 'CLUS', 'CLUSTER', 3, :free_blocks);
---
Q: What is the consequence of the High Water Mark?
A: The HWM marks the highest point (in blocks) that the table has ever
grown to -- not the current number of rows.
Purposes:
- It's used as a "boundary" for full table scans so the optimizer knows
where to stop scanning blocks. (This can be a huge performance hit if
you've got a huge table but small amounts of data in it.)
- fast data appends know quickly where to append data, without searching
for "holes" in the table blocks. Increases performance, loses space.
The problem with the HWM is, deletes never lower it. Even with an empty
table. The only way to clear it is to truncate the table.
---
Q: Can you reset the high water mark (HWM) in a table?
A: Without truncating or doing CTAS, no.
ALTER TABLE table DEALLOCATE UNUSED KEEP integer;
but this doesn't seem to free up anything like doing a truncate does.
---
Q: What is the effect of "reuse space" on the truncate table command?
A: It does NOT return the freed blocks/extents back to the tablespace, but
leaves them allocated to the table. Does NOT reset the high water mark
(hwm). The default behavior of truncate table is to reset the HWM and
clear space.
---
Q: Why not just have one huge tablespace on your Oracle server?
Q: Why not have just one big tablespace? One large tablespace?
A: Less about performance, more about management. Religious argument:
Pros
- 10g promising full administrative control over tablespaces
- no more 2gb file size limitations
- RMAN can do block-level recovery
Cons
- administratively, all your server is in one .dbf file
- if something runs amok, you fill up your sole TS and the database
crashes
- still limits on TS upper limits even past 9i
---
Q: How can you tell if your tablespace is fragmented? How do you fix it?
A: Query dba_free_space; if your tablespace has many, many free chunks,
and they're all very small in size ... you're fragmented.
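For example, a quick per-tablespace look at free-space fragmentation (a sketch; what counts as "too many" small chunks is a judgment call):

```sql
-- many small free chunks = fragmented; a few big chunks = healthy
select tablespace_name,
       count(*)             free_chunks,
       max(bytes)/1024/1024 largest_mb,
       sum(bytes)/1024/1024 total_free_mb
from   dba_free_space
group  by tablespace_name
order  by free_chunks desc;
```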
You can try to fix it with "alter tablespace ... coalesce", but this can only merge adjacent free extents.
The true way to fully defragment is to export the tablespace's contents with the "compress=n" option, truncate the objects in the tablespace, coalesce it, then imp with the "ignore=y" option.
--- Q: Why not just make every field a varchar2(4000) in your database? Why don't I just make every field a huge varchar2?
A:
- for one, storing dates as strings is an awful idea
- for another, storing numbers as strings will corrupt them and force to_number conversions to do simple arithmetic
- you can always expand varchars ... but not so much shrink them
- you will cost yourself performance at the DB level if you're joining on these overly long varchars
- over-sized varchar2 fields have downstream effects on developers, who frequently read arrays into the front-end straight from the database and will quickly max out RAM locally
- over-sized varchar2 fields will corrupt data entry screens and pre-formatted report screens
- if you have to dump data to a fixed-length format, your files may not be able to handle hugely long rows
- with enough long varchar2() fields, you may span the size of your extent and cause row movement/IO issues
- best-practice argument of just defining columns for what you need
But the biggest reason? ORA-01450: maximum key length (3218) exceeded. Oracle indexes have a max key length (for a typical 8k block size, that limit is 3218 characters in 8i, but was expanded to 6398 characters in Oracle 9, 10 and 11; no idea what it is for 12).
Tom Kyte's answer to the same question: http://tkyte.blogspot.com/2006/11/see-you-just-cannot-make-this-stuff-up.html
Metalink on the ora-01450: https://support.oracle.com/epmos/faces/SearchDocDisplay?_adf.ctrl-state=yimzxsx54_9&_afrLoop=468974492136206
--- Q: What is a good de-fragmentation tool for tablespaces?
A: The reclaim script on asktom; or run ADDM, which can provide some information.
--- Q: How do I tell if a table is fragmented?
A:
- Some say you can tell it's fragmented by the number of extents a table has obtained. However this is not necessarily a good measure; what if all the extents are contiguous?
- You can look at the row-by-row rowids, or the number of rows per block assigned to a table (see the dbms_rowid package), to truly see if a table is fragmented.
--- Q: How do I reorganize a fragmented table?
A: Option 1: export and import it; eg:
- exp user/pwd@sid file=XXX tables=XXXX
- truncate the table (this resets the high water mark (HWM))
- imp user/pwd@sid file=XXX full=y ignore=y
Option 2: just move it from one tablespace to another and back:
sql> alter table X move tablespace Y;
Note: make sure you check for invalidated objects (stored procs, etc.) and unusable indexes when you do this.
10g: alter table ... shrink space
--- Q: Should I really be concerned if a table is fragmented?
A: Arguable.
Yes: fragmented free space can cause large amounts of I/O while the system tries to find a large enough free space to insert a new row (if the new row causes an extent allocation).
No: typical data access is through indexes, which contain references to the rowids directly, and thus table extent locations do not matter.
However: if your application code does not correctly use the indexes and depends on full table scans, then fragmented tables will have large performance effects.
--- Q: Does the truncate command de-allocate extents from a table? What advantages does truncate have that drop table or delete * from table does not?
Q: Truncate versus delete? What are the differences between truncate and "delete from" table?
A:
- delete does not readjust the HWM
- truncate readjusts the HWM, with an option to either "drop storage" or "reuse storage"
- delete always reuses storage
- delete generates redo log, truncate does not, therefore truncate is faster, but deletes can be rolled back
- deletes can use a where clause
- truncate does NOT use rollback/redo logs (but it isn't a "non logged" operation)
- truncate will not fire delete triggers and won't write to a snapshot log
Caveats to using Truncate:
- you must be the schema owner (or have the "drop any table" privilege)
- there is no "grant truncate" privilege
- truncate implicitly commits and is thus unrecoverable
- truncate does not check RI constraints and thus can leave referentially unsound data present
--- Q: Is truncate a non-logged operation? Is this an oracle myth?
Q: Are truncates logged?
A: Yes, they are logged: they just use minimal redo logging so as to work quickly.
https://asktom.oracle.com/pls/apex/f?p=100:11:::NO:RP:P11_QUESTION_ID:5280714813869
http://docs.oracle.com/cd/B19306_01/server.102/b14200/clauses005.htm
--- Q: How do I emulate "grant truncate" in oracle?
A: Write a proc like this (owned by the schema owner) and then grant execute on the proc to the user you want to allow to truncate your tables.
CREATE OR REPLACE PROCEDURE p_trunc_tab(p_tab_name VARCHAR2) AS
  v_command VARCHAR2(200);
BEGIN
  v_command := 'TRUNCATE TABLE '||p_tab_name;
  EXECUTE IMMEDIATE(v_command);
END;
/
grant execute on p_trunc_tab to tboss;
then as user tboss:
sql> exec stg.p_trunc_tab('t1');
and the table stg.t1 will be truncated.
Update: the "drop any table" system privilege also allows truncating other schemas' tables, if you're willing to grant something that powerful.
--- Q: How do I allow one user to create objects in another user's schema?
A: grant "create any table"
--- Q: How do you tell if it's time to recreate/rebuild an index?
A: From comp.database.oracle.server 11/4/98 by nelsona@my-dejanews.com and from Oracle-L discussions 9/21/03.
Not very often; as with Sybase, only an exceptional event requires an index to be rebuilt (a bug, disk errors or some other type of physical event). OR if you have a high-delete table (which should have its storage pctused/pctfree values tuned at create).
Check your queries: if nested-loop queries seem to slow down without any reason, consider rebuilding indexes.
Quantitative ways to tell/rules of thumb:
- select height, del_lf_rows from sys.index_stats; Typically, if HEIGHT is >= 3, it's time to rebuild the index. If the number of DEL_LF_ROWS is high relative to the number of rows in the table for the index, it's time to rebuild the index. (Note: rows may not always be present in this table.)
- Check storage sizes: if the number of blocks in the index exceeds 50% of those in the table, it's a good candidate for a rebuild. Lots of blocks in the index implies lots of deleted rows.
Advantages to re-creating an index: compacts the index, minimizes fragmented space, and serves as a way to modify the storage clauses. Existing indexes can be used to create new indexes, speeding up the process.
Note: Tom Kyte states that index rebuilds (on b*tree indexes, not bitmap) are NEVER needed.
Why does an index become fragmented/sparse in nature? Frequent deletions; the index entries are not removed, simply marked as "unused" and made available for reuse, much like disk blocks on PC filesystems. Sequence-keyed indexes also cause index fragmentation, since B*-tree indexes are optimized for random data inserts.
--- Q: What does alter table shrink space buy you in 10g? How is it different from alter index rebuild?
A: Shrink space will consolidate the sparsely used extents; alter index rebuild will indeed rebuild and consolidate the index.
--- Q: How do you move an index to a new tablespace? How do you move indexes to new tablespaces?
A: alter index [index name] rebuild tablespace [new tablespace];
--- Q: I get an "ORA-01031: insufficient privileges" error when trying to do alter index rebuild online for an index in someone else's schema, despite having alter any index and create any index? What is the issue?
A: Note:313208.1
alter index rebuild online (as opposed to the version w/o online) creates a journal table in the schema that owns the index, so the calling user needs "create any table" to allow creation of that journal table.
--- Q: I get an "ORA-01031: insufficient privileges" when creating a view that selects from another user's view?
A: Because you have to have grants done specifically on the TABLES that the other user's view looks at. Or, if that isn't the issue, ensure that the user trying to create the view has grants DIRECTLY on the tables and not via a role. Oracle struggles with the transitive property of granting select to a role, the role to a user, and then allowing that user to create a view based on those grants.
--- Q: I get an "ORA-10631: SHRINK clause should not be specified for this object" error when issuing alter table X shrink space cascade. Why?
A: Per the SQL Reference manual, there are some restrictions. Tables cannot be shrunk if:
- the table has function-based indexes on it (most common answer)
- the table is clustered
- the table has a LONG column
- the table is an IOT
- the table is compressed
- the table has an MV on it defined as refresh on commit
--- Q: How do you get data in the sys.index_stats table?
A: You must run: analyze index [index name] validate structure; Just computing/estimating stats isn't enough. However, this table only holds values for ONE index at a time! So you'd have to individually validate the structure of the index in question before getting usable stats.
--- Q: Is it better to drop/recreate indexes or use the rebuild feature?
A: alter index [index name] rebuild; is usually faster/better than outright dropping and recreating the index for two reasons:
- the old index can be used to create the new one, using a fast full scan
- the old index can be used by users while the new index is created
Syntax: alter index [index name] rebuild compute statistics online nologging;
Note: rebuilds also reset the "high water mark" (HWM).
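Putting the earlier rules of thumb and the rebuild syntax together, a typical session might look like this (the index name is hypothetical):
sql> analyze index scott.emp_name_ix validate structure;
sql> select height, lf_rows, del_lf_rows from index_stats;
sql> -- if height >= 3, or del_lf_rows is a large fraction of lf_rows:
sql> alter index scott.emp_name_ix rebuild online nologging;
Remember that index_stats holds only the one index just validated, and that validate structure (without the online keyword) locks the underlying table while it runs.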
--- Q: How do you get Oracle to ignore indexes marked unusable by the alter index command? (done when doing massive loads)
A: alter session set skip_unusable_indexes=true;
Q: Followup: how do you get Oracle to permanently have this option set?
A: As of 10g, skip_unusable_indexes is an initialization parameter (default TRUE), so you can set it in the init.ora/spfile or with alter system.
--- Q: Is it a myth that frequent deletions cause fragmentation in indexes? Is it a myth that deleted space is never reused in indexes?
A: Yes, generally speaking it is a myth.
- Index space freed by deleted entries can be reused w/o rebuilds (this is the myth).
- Index leaf blocks are only placed back on the free list when completely empty.
However, as a result of this, monotonically increasing indexes with deletes spread across the range of the data will end up being fragmented and have holes, since the leaf blocks won't be released (because generally there won't be a whole "group" of key values cleaned out). Bulk deletes without re-inserts will also result in fragmentation ... but will NOT render the data unusable. If you have a table constantly filled then completely cleaned, there is no fragmentation issue: the blocks are completely emptied, thus put back on the free list.
--- Q: Is it a myth that indexes become "unbalanced" or "skewed" over time?
Q: Is it a myth that you have to rebuild your indexes? Rebuild indexes?
A: Yes, it's a myth: leaf pages in a b*tree index structure are all on the same level. Some of the pages can become less densely populated than others (naturally, since data can be deleted and thus index entries deleted). Most indexes never need to be rebuilt, ever.
--- Q: What does "coalesce" do?
A: "alter index [index name] coalesce" causes Oracle to merge adjacent index blocks which are only partially filled, thus freeing up space. Coalesce gets much of the benefit of a rebuild, without the cost. This is a recommended maintenance activity on Oracle 8+.
--- Q: What are some tips/tricks to speed up the rebuild of large indexes?
A: 1. create the index local and unusable, 2.
use several sessions to rebuild the individual partitions in parallel. 3. An index rebuild also needs to sort, so give it a huge sort area size and retained size. 4. A rebuild will do a full table/partition scan, so give it a big db_file_multiblock_read_count. 5. Use nologging to speed up the action. 6. If you can bounce the instance, a huge log_buffer helps when you are building the index in logging mode.
We got good results when we tweaked the memory within the text index parameters to hundreds of megs (in our case it was about a 50G index). I remember a gotcha along the lines that even if you asked for (say) 500M of memory, if this exceeded the "global" ctx memory parameter (set with ctx_adm), it would be silently adjusted down... Plus of course cranking up the sort_area_size / sort_area_retained_size parameters, rebuilding partitions in parallel, etc.
--- Q: How do you rebuild local indexes? Alter index rebuild won't work.
A:
-- if the index is just partitioned...
select 'alter index ' || index_owner || '.' || index_name || ' rebuild partition ' || partition_name || ' tablespace indexes;' from dba_ind_partitions where index_name='FU15MINO_ROOM_UNIT_SK_XBMP';
-- if the index is subpartitioned...
select 'alter index ' || index_owner || '.' || index_name || ' rebuild subpartition ' || subpartition_name || ' tablespace indexes;' from dba_ind_subpartitions where index_name='FU15MINO_ROOM_UNIT_SK_XBMP';
--- Q: If I truncate a subpartition of a table, do the local indexes get invalidated?
A: Nope, but global indexes do.
--- Q: Can I disable an index in place?
A: Only if it's a function-based index; otherwise you'll get an error.
--- Q: Can I mark an index unusable on purpose so that it isn't updated while I'm loading?
A: Yes:
alter index ind_test1 unusable;
However, running this in Toad generally results in an error that can be overcome by running it as a Toad Script and/or running it in the sqlplus command line.
--- Q: What is the syntax to truncate a partition?
A: ALTER TABLE sales TRUNCATE PARTITION dec98;
--- Q: How is space allocated in Oracle? How do you manage space in Oracle?
A: Tablespaces have segments. Segments have extents. Extents have data blocks.
Segment level: pctincrease defaults to 50, meaning each newly allocated extent is 50% larger than the previous one. Not very efficient. Better to make pctincrease 0 and make next a fixed fraction (say 10%) of initial, so extents grow uniformly.
Table Level: several space parameters control the growth of the table. Syntax:
create table tablename (columns, columns, ...) storage (initial xK next xK pctincrease 0 minextents 1 maxextents 250)
- initial: specifies the size of the extent(s) to initially give the table. If fast growing, give a bunch at the onset. Recommended to set the initial extent big enough to completely encompass your data size.
- next: specifies the size of the next extent to allocate when the initial (or last allocated) extent fills up. ALWAYS make this the same size as initial (in DMT environments) or you'll fragment.
- pctincrease: defaults to 50: this is the percentage increase in extent size over the LAST allocated extent that the next one shall be. NEVER set this to anything besides 0 (which forces uniform extent growth).
- minextents:
- maxextents:
- initrans: defaults to 1: only change it if you anticipate lots of DML (inserts/updates/deletes) on the exact same block at the exact same time.
pctfree: specifies the % of the data block to leave free to allow existing rows to be updated. If you make this too small, updates in place will cause rows to be "migrated" constantly. If you have high update rates, set it higher than the default of 10% (around 40%). If you have high inserts, no updates and low deletes, set it lower than 10%.
pctused: the "reverse" of pctfree: specifies the threshold percentage of used space a block must fall below before new inserts are again allowed in it.
In high delete operations, set this high to allow reuse of data blocks. In low delete operations, safe to allow this to be lower than default. Defaults to 40, set to 60% if not many deletes. Example: if you have a load-only table with no updates and no deletes, set pctfree=0, pctused=1. block size: if you have a large row, growing largely, make block size bigger. Note: with LMT in 9i, you can use "segment space management auto" and avoid ever having to worry about these values. --- Q: Do storage clauses on the tablespace (extent sizes, max, mins, etc) automatically get used, even if such clauses are used on the tables underneath? A: Yes; if you omit a storage clause on table create, the tablespaces' storage clauses are used. However, if you manually specify a storage clause at table create, the tablespace's storage clause values are overridden. BUT if you do this, you pretty much guarantee table fragmentation. --- Q: Should I worry if I'm using too many extents? A: Arguable. Note: In earlier versions of Oracle (v7.3.x) the maxextents was capped based on your block size (example, using 8k blocks a table was limited to 505 extents, ever. If you tried to use a maxextents value larger than that, the table create would fail). v8 introduced "unlimited" option, eliminating that concern. Yes: Fewer extents means less I/O when doing table scans. The time it takes for Oracle to allocate/deallocate space is proportional to the number of extents, thus fewer extents means better performance. Also, large numbers of extents will mean delays when dropping the table because of extended system table clean up (which track extents line by line). Administratively, unless you're using a massive extent size you can always deal with fewer (but larger) extents by reorganizing the table. - Strategy: Once you reach an arbitrary number of extents (1024 as per the SAFE document) move the table to a tablespace w/ a larger extent size. 
4096 is really the maximum number of extents you EVER want to have for a table, per the SAFE document. Other experts put the limit at 270-300.
No: with LMTs the extent size becomes meaningless, since Oracle does all the managing for you anyway. And the concern with disks is fragmentation at the disk block level, NOT extents. As long as the extent size is a multiple of db_block_size, you won't have fragmenting. Also, if you're using the indexes, it doesn't matter what the table extents look like.
--- Q: Is there any disadvantage to using overly large extent sizes with relatively small data size inside?
A:
- the obvious disk space waste
- performance hit??
--- Q: Why not just have one big extent?
A: It might be nice, but utterly impossible to strive for. The maximum extent size is limited by your OS (2gb a familiar limit) because extents cannot span datafiles. It's apparently an old Oracle myth that storing segments in just one extent maximises performance.
Oracle v7: apparently there were bugs in the multi-extent algorithms in Oracle, so the "single extent" concept was widely used.
Oracle v8.0: 2gb limit.
Oracle v8i: limited only by your OS (some older 32-bit OSs were limited to 2gb or 4gb).
--- Q: Why not just size your extents incredibly small, to maximize disk usage?
A: The I/O in getting a new extent is a huge penalty. The UET$ table contains a row for every extent and would grow enormously.
--- Q: Does it matter if your extents are contiguous?
A: Nope. ?? details?
--- Q: What are the default create table parameters for % free, number of extents, etc.?
A: Tablespaces:
- initial: (size of 5 data blocks)
- next: (size of 5 data blocks)
- pctincrease: 50
- minextents: default 2
- maxextents: depends on data size
Segments:
- pct_increase 50
Tables:
- pctfree 10
- pctused 40
- initrans 1
- maxtrans 255 (function of data block size though)
- initial 4M (taken from tablespace definition?)
- next 4M (taken from tablespace definition?)
- minextents 1
- maxextents 4096
- pctincrease 0
- freelists 1
- freelist groups 1
Indexes:
- initrans 2
- maxtrans 255
- pctfree 10
- initial 4M (taken from tablespace definition?)
- next 4M (taken from tablespace definition?)
- minextents 1
- maxextents 4096
- pctincrease 0
- freelists 1
- freelist groups 1
--- Q: What is the overhead per row in Oracle, as compared to Sybase?
A:
- Sybase's overhead on a 2k page is 86 bytes.
- Oracle's is variable: a minimum of 84 bytes up to 107 bytes. ?? what makes it vary so much?
--- Q: How do I get the average row size of a table?
A: Two ways:
- update your stats, then select avg_row_len from dba_tables where table_name='BOSS_TEST';
- without updated stats, a convoluted query grabbing the average size of each column in the table and adding them together. This method causes a full table scan and is ridiculously inefficient. Just do method one.
--- Q: How do I solve "ORA-01658: unable to create INITIAL extent for segment in tablespace TEMP"?
A: First check the segment; make sure it's not created as "Permanent." Otherwise, have the dba extend the tablespace: TEMP's tablespace is full.
--- Q: What is the impact of LMT tablespaces with manual versus automatic segment_space_management?
A: ???
--- Q: How can I get a schema-object-only backup of my user schema, without any of the table data?
A: exp .... rows=n
--- Q: What does the "query" option do for exp?
A: Allows a subset of the table (as delineated by the where clause of your query) to be exported. Example: ??
--- Q: In 10g, how do you query the recycle bin/trash bin?
A: sql> show recyclebin
sql> select * from user_recyclebin;
sql> select * from dba_recyclebin;
--- Q: How do you clear the recycle bin?
A: Several ways:
by table: purge table BIN$...
or purge table [original tablename]
by tablespace: purge tablespace sandbox;
by user/by tablespace: purge tablespace XXX user YYY;
by user: logging in as the user: PURGE RECYCLEBIN;
by entire database: as sysdba: purge dba_recyclebin;
Incidentally, you can "drop table ... purge" and accomplish the same thing.
--- Q: Can you create a table that reads from a .CSV file (like you can do in mysql)?
A: Yes! In 10g. Prerequisites: you need a database directory pointing at the f/s location, and you need to give the user read/write on that directory.
create or replace directory xtern_data_dir as '/home/oracle/bosst';
drop table t1;
CREATE TABLE t1
( c1 NUMBER,
  c2 VARCHAR2(30)
)
ORGANIZATION EXTERNAL
( default directory xtern_data_dir
  access parameters
  ( records delimited by newline
    fields terminated by ','
  )
  location ('report.csv')
)
/
--- Q: What is ASM? Automatic Storage Management?
A: Acronym: Automatic Storage Management.
From the 10g concepts manual: Automatic Storage Management automates and simplifies the layout of datafiles, control files, and log files. Database files are automatically distributed across all available disks, and database storage is rebalanced whenever the storage configuration changes. It provides redundancy through the mirroring of database files, and it improves performance by automatically distributing database files across all available disks.
Basically, ASM takes away the need to assign tablespaces to filesystems and eliminates a common DBA headache of manipulating .dbf file locations. ASM is its own filesystem, and comes with a tool (asmcmd) to move around the file system.
--- Q: How do you convert a database that's non-ASM to start using ASM?
A: Example in the oracle doc sets: http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/storeman.htm#ADMIN036
Apparently it's as easy as defining disk groups on existing file systems and then moving on ...
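A minimal sketch of that first step, run against the ASM instance (the disk paths and group name here are assumptions for illustration; the actual datafile migration is typically done afterwards with RMAN's "backup as copy ... format '+DATA'" followed by "switch database to copy"):
SQL> create diskgroup data external redundancy
       disk '/dev/rdsk/c1t1d0s4', '/dev/rdsk/c1t2d0s4';
SQL> alter system set db_create_file_dest='+DATA' scope=spfile;
After that, newly created datafiles land in the disk group automatically.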
--- Q: How are files created in ASM? A: per the Database utility document, ASM generates filenames according to the following scheme: +diskGroupName/databaseName/fileType/fileTypeTag.file.incarnation so, examples of archive log: +DATA/dcsedwp/arc12477_0622814479.002 backup file: +FLASH/dcscadp/backupset/2008_11_02/annnf0_tag20081102t17344 Control File Copy: +FLASH/dcscadp/controlfile/backup.1471.674616551 --- Q: What are the various commands available in asmcmd? A: http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/asm_util.htm#table cd change directory in ASM pwd print working directory ls lists contents help help prints out more help find lets you quickly find any and all files. exit exits du disk usage from point lsct lists all clients connected to ASM lsdg list all disk groups and their attributes (same as ls -lsa from top) mkalias makes aliases rmalias removes aliases mkdir makes asm directories rm allows you to remove ASM directories, files, etc. -r for recursion. --- Q: How are asm files generated? What do the different parts of the asm filename mean? A: asmcmd> cd +data/yoursid/datafile asmcmd> ls -l ... DATAFILE UNPROT COARSE JAN 19 17:00:00 Y SYSTEM.296.676569693 DATAFILE UNPROT COARSE JAN 19 17:00:00 Y UNDOTBS1.1150.676569673 DATAFILE UNPROT COARSE JAN 19 17:00:00 Y UNDOTBS2.1225.676569689 DATAFILE UNPROT COARSE JAN 19 17:00:00 Y USERS.759.676569675 ... first part is the TS name, second part is the fileid, the third is the incarnation. The 2nd.3rd guarantees uniqueness. --- Q: What are some good ASM documents to read? 
A: http://docs.oracle.com/cd/E11882_01/server.112/e18951/asm_util004.htm#OSTMG94549 : ASMCMD lsdg man page https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=368460638566036&id=1187723.1&_afrWindowMode=0&_adf.ctrl-state=r65sr9u8c_138 https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=369938805302344&id=332180.1&_afrWindowMode=0&_adf.ctrl-state=r65sr9u8c_594 http://docs.oracle.com/cd/E18283_01/server.112/e16102/asmdiskgrps.htm http://docs.oracle.com/database/121/REFRN/GUID-5CF77719-75BE-4312-84A3-49A7C6A20393.htm https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=366750157522264&id=1551288.1&_afrWindowMode=0&_adf.ctrl-state=r65sr9u8c_4 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Performance/Tuning =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- --- Q: What is the common practice way to diagnose query issues in Oracle? A: Step by step - Pull the sql out of v$sql/v$sqltext/v$sqlarea/v$sqlplan and analyze the plan - enable 10046, 10053 traces, generate trace file - run tkprof or trace analyzer (trcanlzr) on the trace file. - run statspack during the sql execution, look for issues (replaces utlbstat/utlestat) --- Q: What are the important utility scripts to run before P&T actions can be done? or, How do I grant the plustrace role to a user, if I get SP2-0618 when trying to "set autotrace on" as a normal user? A: - Create the plan table: @$ORACLE_HOME/rdbms/admin/utlxplan.sql as the user who will be doing the explaining. - Run the plus trace script @$ORACLE_HOME/sqlplus/admin/plustrce.sql to create the plustrace role. NOTE: you must be SYS, NOT system to have this work properly! - grant plustrace to [user] to allow others to use autotrace. Oracle 11g updated: - --- Q: I set autotrace on and get: "SP2-0613: Unable to verify PLAN_TABLE format or existence" A: run @$ORACLE_HOME/rdbms/admin/utlxplan.sql to create the plan_table. 
Note, the next time you set autotrace on, you'll get another error, but if you do a select * from dual; the autotrace function will work. --- Q: Even after creating the plan table and the plustrace role, and granting plustrace to a user, I still get: SP2-0618: Cannot find the Session Identifier. Check PLUSTRACE role is enabled SP2-0611: Error enabling STATISTICS report A: log out of the user, log back in. the plustrace role won't take if the user is still logged in. ahhh. --- Q: How do you tell which query plan, which index is getting used? A: (see oracle_admin for exact sql) 2 ways: sql> set autotrace on: /* collects execution plan */ sql> set timing on /* keeps track of elapsed time */ sql> execute your sql here ... This will execute the sql, then print the plan used (along w/ timing stats). If you don't want to actually execute the sql ... do this: - create the plan table first, if it doesn't exist: sql> explain plan set statement_id='123' for sql> execute your sql here... Explained. sql> select * from plan_table where statement_id='123'; (note; you can really clean this select up) OR, you can trace SQL without running them and get the optimizer plan by doing this: dw@EHRIDW> set autotrace traceonly; --- Q: What is a better way to look at plans than just selecting from the plan_table? A: select * from table (dbms_xplan.display); select * from table (dbms_xplan.display('plan_table',null,'all')); dbms_xplan column definitions (besides the obvious ones) pstart: Partition start pstop: Partition stop TQ: Table Queue number (when using parallel) In-Out: Table-Queue type (when using parallel) PQ Distrib: Table Queue distribution method (when using parallel) Note; somewhere in 9.2.0.x the plan_table changed output, so you may get this message: Note: PLAN_TABLE' is old version. If so, re run utlxplan.sql and (assuming you've properly upgraded your Oracle software) you'll get the updated plan_table. 
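Pulling the explain plan and dbms_xplan pieces together, a typical session looks like this (the table is hypothetical):
sql> explain plan for select * from emp where empno = 7839;
sql> select * from table(dbms_xplan.display);
In 10g and up you can also pull the actual plan of a statement already in the shared pool with dbms_xplan.display_cursor('&sql_id', null, 'ALLSTATS LAST').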
--- Q: Is it true the optimizer chooses different paths than reported in the Explain Plan output? Is this an Oracle Myth?
A: Yes, this happens; it is not a myth. Explain plan is a guess as to what is going to happen before it happens. When Oracle goes to actually execute the query, it may take a different path.
--- Q: What is a "10046" and/or a "10053" trace? What is the difference?
A:
- 10046: a dump of the sql execution trace
- 10053: a dump of the optimizer decisions
--- Q: What are the different "levels" in the 10046 trace output?
A:
- 1: standard sql_trace functionality
- 4: level 1 plus bind variables
- 8: level 1 plus waits
- 12: level 1 plus bind variables and waits
--- Q: How do you set up a "sql trace?" (aka a 10046 trace) What is the syntax?
A: Old-school way:
ALTER SESSION SET sql_trace=true (or set SQL_TRACE=true in the init.ora file)
set autotrace on explain stat
set timing on (or alter session set timed_statistics=true;)
alter session set events '10046 trace name context forever, level 12'; (or dbms_support.start_trace(waits=>true, binds=>true))
alter session set events '10053 trace name context forever, level 1';
..execute your sql
alter session set events '10046 trace name context off';
alter session set events '10053 trace name context off';
set timing off
set autotrace off
-------------
New-school way: How to Enable SQL_TRACE for Another Session or in MTS Using Oradebug (Doc ID 1058210.6)
Get the pid of your session: select pid, spid, username from v$process;
sql> oradebug setorapid 123
SQL> oradebug unlimit
SQL> oradebug event 10046 trace name context forever, level 12
... when done
SQL> oradebug event 10046 trace name context off
sql> oradebug tracefile_name -- this will print out the path and name of the trace file being generated
This puts a file with a ".trc" extension in the $ORACLE_BASE/admin/sid/udump directory.
Then go to that directory and:
% tkprof sid_smon_pid.trc out.prf sys=no
(the sys=no eliminates a lot of sys activity from being analyzed, making the out.prf file much easier to read)
----------------------
Other ways to start traces (why are there so many ways to do this??):
- dbms_support.start_trace_in_session;
- dbms_session.set_sql_trace(true);
- dbms_system.set_sql_trace_in_session(sid,serial#,true) (see two questions down...)
- dbms_system.set_ev(sid, serial#, 10046, level, '');
- dbms_profiler: better for analyzing pl/sql statements
--- Q: How do you use tkprof?
A: Run tkprof against a .trc tracefile as generated by one of the 10046/10053 or other sql tracing methods.
$ tkprof sid_smon_pid.trc out.prf sys=no
(the sys=no eliminates a lot of sys activity from being analyzed, making the out.prf file much easier to read)
another example:
$ tkprof edw16_ora_24657.trc edw16_ora_24657_tkprof.out SYS=no explain=dbsnmp/dbsnmp
See these metalink notes:
TKProf Basic Overview (Doc ID 41634.1)
Note 41634.1 TKPROF and Problem Solving
Note 29012.1 QREF: TKPROF Usage - Quick Reference
--- Q: How do you 10046 trace a session that is not your own?
A: Per Metalink Note:100883.1, How to Create a SQL Trace from Another Session:
- dbms_system.set_sql_trace_in_session(sid,serial#,true)
as user sys:
alter system set timed_statistics=true;
exec dbms_system.set_sql_trace_in_session(82,39755,true);
... run whatever in the session you're monitoring
exec dbms_system.set_sql_trace_in_session(82,39755,false);
and the .trc file should be in the udump dir. tkprof it as above.
--- Q: How do you run 10053 traces?
A: (note: level 1 gets less information than level 2 in the 4th field)
On your session:
alter session set events '10053 trace name context forever, level 1';
... run sql
alter session set events '10053 trace name context off';
To monitor someone else's session:
exec sys.dbms_system.set_ev (sid,serial,10053,1 or 2,'');
...
run sql
exec sys.dbms_system.set_ev (sid,serial,10053,0,'');
example:
exec sys.dbms_system.set_ev (79,1091,10053,2,'');
exec sys.dbms_system.set_ev (79,1091,10053,0,'');
--- Q: How do I generate a Testcase for Oracle?
A: How to Create a SQL testcase Using the DBMS_SQLDIAG Package [Video] (Doc ID 727863.1)
SQL> create directory exp_tc as '/tmp';
declare
  tc_out clob;
  v_dir VARCHAR2(20) := 'EXP_TC';
begin
  dbms_sqldiag.export_sql_testcase(v_dir, sql_id=>'6d64jpfzqc9rv', exportMetadata=>TRUE, exportData=>FALSE, testcase=>tc_out);
end;
/
--- Q: What is the new way to get trace data? What is the tkprof/10046/10053 replacement?
A: SQLT or SQL-T: though it isn't a "replacement" for 10046/10053 traces, just an additional tool.
1. Download the tool; link available in these doc ids:
How to Collect Standard Diagnostic Information Using SQLT for SQL Issues (Doc ID 1683772.1)
All About the SQLT Diagnostic Tool (Doc ID 215187.1)
2. unzip the sqlt.zip package to a directory on your Oracle server (ex: /opt/oracle.SupportTools/sqlt)
3. grab a local copy of the sqlt_instructions.html so you can use it as a guide
4. If not already done, install it:
select count(*) from dba_users where username='SQLTXPLAIN';
and if the user is not there, then
$ cd sqlt/install
$ sqlplus / as sysdba
SQL> START sqdrop.sql
and
$ cd sqlt/install
$ sqlplus / as sysdba
SQL> START sqcreate.sql
the start script asks for the pwd of the SQLTXPLAIN user and for a "default" user to use.
5. Log in as either the default user or as sys
6. Grab the sql_id you want to trace. Trace like this:
$ cd sqlt/run
$ sqlplus appuser/pwd (or whatever your default user is, or sys/system, but it works better if you run it as the appuser, not sys/system)
SQL> START sqltxtract.sql [SQL_ID]|[HASH_VALUE] [sqltxplain_password]
SQL> START sqltxtract.sql 0w6uydn50g8cx sqltxplain_password
SQL> START sqltxtract.sql 2524255098 sqltxplain_password gzayw2xak1t8r
This will run for 20-30 mins depending on the complexity of the sql.
You can monitor it like this: SELECT * FROM SQLTXADMIN.sqlt$_log_v;
7. When done, a .zip file is dropped in the working directory: sqlt_s57784_xtract_5cs9pu60rsxgp.zip as an example. Scp/ftp it locally to examine. It may be large: 25-100MB
8. Within the zip file, there will be 6 useful HTML files:
- sqlt_s57784_lite.html: Basic information: explain plans, list of tables, columns and indexes
- sqlt_s57784_main.html: the primary file to look at
- sqlt_s57784_readme.html: all sorts of commands to do exp/imps, compare sql plans, etc.
- sqlt_s57784_sql_detail_active.html: SQL Detail Report: one-off OEM report showing SQL Details in graphical format; detailed plan information.
- sqlt_s57784_sql_monitor.html: Sql Monitor Report; shows graphical depictions of the core plan resolving the sql.
- sqlt_s57784_sql_monitor_active.html: Output of the Cloud Control Monitored SQL Execution Details, with plan stats, plan details, waits graphically depicted, etc.
Other helpful SQLT doc ids:
- All About the SQLT Diagnostic Tool (Doc ID 215187.1)
- SQLT Usage Instructions (Doc ID 1614107.1)
--- Q: My trace files are getting too big, and I can't set max_dump_file_size to be greater than 2gb? What do I do?
Q: How do I limit the size of my trace files?
A: set the dump file size to unlimited:
alter session set max_dump_file_size=unlimited;
alter system set max_dump_file_size=unlimited;
dbms_system.set_int_param_in_session can't set it unlimited; bug/limitation.
To limit to a certain size:
dbms_system.set_int_param_in_session(r.sid,r.serial#,'max_dump_file_size',)
--- Q: What do utlbstat.sql and utlestat.sql do?
A: These are the "begin" and "end" statistics collecting procedures that can be used to help troubleshooting. Obsoleted by the statspack release in 8.1.6.
--- Q: What is Statspack? How do I use it?
A: Oracle's equivalent to sp_sysmon in Sybase. A replacement for the utlbstat/utlestat scripts included prior to its release in 8.1.6.
- See Metalink Note:94224.1: FAQ- Statspack Complete Reference for more detail like this
- also see $ORACLE_HOME/rdbms/admin/spdoc.txt; it's a large FAQ about statspack.
installation: connect / as sysdba and run @$ORACLE_HOME/rdbms/admin/spcreate.sql
It creates its own user (perfstat/perfstat), tables and views.
Then, similarly to Sybase's sp_sysmon, you can run the statspack for a time interval and generate results.
Usage: connect as the new perfstat user and wrap snapshots around a period of time you want to monitor. example:
SQL> exec statspack.snap;
.. do whatever you're monitoring
SQL> exec statspack.snap; -- to end the snapshot period.
then run
SQL> @$ORACLE_HOME/rdbms/admin/spreport.sql
it will prompt you for a start and end period to analyze, then generate a report (spooled to screen and to the file sp_X_Y.lst where X is begin time, Y is end time).
Note: statspack is now basically obsoleted by AWR.
--- Q: What do the various high-level sections mean in the report?
A: Metalink Note:228913.1: Systemwide Tuning using STATSPACK Reports has an overview of the various sections and what they mean... but interpreting the Statspack report and knowing what values are "good" and what are "bad" is really more of an art form than anything else.
From top to bottom, some areas to study:
- Load Profile: look for large Redo Size, lots of reads/writes
- Instance Efficiency Percentages: look for *very* low percentages, unless you're in a DSS environment where lots of full table scans lower this ratio. Negative hit ratios indicate the cache/shared pool is undersized and is thrashing (VERY bad); should never happen in 10g w/ sga_target/automatic memory management turned on.
- Top 5 Timed Events: these are the first places to look. More information on timed/wait events is in $ORACLE_HOME/rdbms/admin/spdoc.txt.
- Time Model System Stats: gives a list of the high level activities and how much time they each took. DB CPU, sql execute elapsed time indicate heavy db time executing jobs.
- Wait Events: first major section to observe. Works in conjunction with the above Timed Events. Look for abnormal wait events for the duration of the run, but ensure the wait time is useful. Note that the IDLE wait events are listed last and can be ignored (typical Idle wait events are SQL*Net message from/to client, Streams AQ wait events, jobq slave wait, virtual circuit status, etc.)
- Background Wait events, Wait Event Histogram: less useful typically
- SQL listings: SQL commands are listed in major sections. Similar to RDA queries, look for non-system queries that have abnormally large values for these thresholds:
  o by disk reads (> 1000 disk reads)
  o by executions (> 100 executions)
  o by parse calls (> 1000 parses)
- Instance Activity Stats: all sorts of stats about what's going on in the instance.
- OS Stats: basic stats from the OS level
- Tablespace/File I/O Stats: self explanatory
- Buffer Pool Stats/Advisory: can get a buffer cache hit ratio for your default and any other pools you may have (keep, recycle, diff sized).
- PGA Areas: cache hit, area stats, histograms and memory advisory
- Process Memory stats: never seen much useful here.
- Undo segment stats: with managed undo, not much here either.
- Latch activity/stats/etc.
- Cache stats: dictionary, library; looking for lots of misses
- Shared Pool advisory
- SGA Advisory and breakdowns
- Finally, a quick printout of nondefault init.ora parameters.
--- Q: How do I interpret Statspack output reports?
A:
1. review the load profile and Instance Efficiency sections for major issues
2. Go to top 5 timed events; these are the issues that really need to be addressed. Based on which event is listed, go to that particular event. If your top wait events are "db file scattered read" and "db file sequential read" then there's not much you're going to be able to tune on the instance.
3. Look at Wait events; these go hand in hand w/ top 5 timed events. Ignore idle events
4. Look at SQL reports, look for abnormal queries by non-system users.
5. Skim instance activity stats looking for abnormal values
6. Look at Tablespace/File I/O stats, looking for unexpected i/o
For items 7-10, take into consideration the scope of your analysis; if you're just running a 5 minute sample, these may not be totally accurate.
7. Buffer Pool analysis: can use this to determine if the caches need re-sizing (though in 10g, this is done automatically for non custom pools)
8. PGA Target Advisory: can use to determine if resizing is needed.
9. Dictionary and Library Cache misses: look for any large % miss values. Recommendation is no more than 2% missing. Balance the % misses with the number of times it was executed (i.e., if it missed 1 out of 4 times the % will be 25% but it's a negligible incident).
10. Shared pool and SGA advisory; possibly tune pools and sga up or down
--- Q: How do I clean out statspack information once it's not needed anymore?
A: exec statspack.purge or sppurge.sql or sptrunc (removes all)
--- Q: What does it mean if I get the following errors when running snapshots?
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00201: identifier 'STATSPACK.SNAP' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
A: the perfstat user probably wasn't created properly. If you drop and recreate the perfstat user, the spcreate.sql script will FAIL because the drop user script doesn't drop all the public synonyms that get created. Run the output of this script first:
select 'drop public synonym ' || synonym_name || ';' from dba_synonyms where table_owner='PERFSTAT';
Always check the output files; any error means it didn't create properly.
--- Q: What is Active Session History (ASH)?
A: New feature in Oracle 10g that gives detailed statistics from in-memory performance monitoring tables; most importantly Wait events.
--- Q: What is the Automated WorkLoad Repository (AWR)?
A: 10g automated version of Statspack, designed to keep a repository of the statspack information longer term.
--- Q: What is Automatic Database Diagnostic Monitor (ADDM)?
A: new feature in 10g that analyzes two different snapshots taken in AWR and makes recommendations of what to do. A fabulous tool!
--- Q: What does an Active Session History (ASH) report give you?
A: The Active Session History (ASH) was introduced in Oracle 10g to provide real-time diagnostics information. ASH Analytics is a feature of Enterprise Manager Cloud Control 12c, which visualises ASH information, making it even simpler to diagnose performance problems.
--- Q: How do I run command-line AWR, ADDM or ASH reports?
A:
1. Log into your server
2. log in as system
3. run $ORACLE_HOME/rdbms/admin/ashrpt.sql
   or run $ORACLE_HOME/rdbms/admin/awrrpti.sql
   or run $ORACLE_HOME/rdbms/admin/addmrpt.sql
community.xmatters.com/docs/DOC-2264
Update: better information from Satish Atmuri at CB: Log into the database and run
@?/rdbms/admin/awrgrpt (for rac)
@?/rdbms/admin/awrrpt (for the instance you're connected to)
awrsqrpt for a particular SQL ID
awrdiffrpt for comparing two different snapshot intervals of a db
--- Q: How does one monitor Wait Events in Oracle?
A: Run statspack spreport.sql, analyze the "wait events" section.
or, select * from v$system_event and look at "total_waits" and "total_timeouts"
select * from v$system_event order by time_waited desc;
select * from v$session_event
select * from v$session_wait
Some comments on frequently seen wait events (info from asktom mostly):
- db file sequential reads are usually caused by index accesses (single block IO)
- db file scattered reads are usually caused by scanning (multi-block IO)
--- Q: What is a good strategy for "Analyze" tables, columns, indexes?
(equivalent to update statistics on Sybase/MS SQL Server)
A:
- Old way: running analyze table compute/estimate statistics nightly (officially deprecated in 9i, more or less obsoleted by the dbms_stats command in 8i)
- New way, as of 8i: dbms_stats. Can do statistics computation in parallel, and only gets statistics requiring calculation (so as not to re-create stats that haven't changed). Can run nightly if tables aren't too big, otherwise can run an "estimate" stats option nightly and perhaps a calculate stats weekly. Always use the package b/c it calculates histograms for you.
exec dbms_stats.GATHER_INDEX_STATS: Collects index statistics.
exec dbms_stats.GATHER_TABLE_STATS: Collects table, column, and index statistics.
exec dbms_stats.GATHER_SCHEMA_STATS: Collects statistics for all objects in a schema.
exec dbms_stats.GATHER_DATABASE_STATS: Collects statistics for all objects in a database.
Call these like this:
SQL> exec dbms_stats.gather_schema_stats('ejp'); /* where ejp is the schema name */
(this won't run from Toad, only command line in sqlplus).
(note: this is actually doing this: exec dbms_stats.gather_schema_stats(ownname=>'ejp'); since the first value is the schema name).
Strategy:
- run compute nightly on smaller oltp tables
- run estimate nightly on large or 24/7 oltp tables
- run compute weekly on dss tables
- run compute weekly on large or 24/7 oltp tables
- run compute once/sporadically on static tables
- Wait until tables are computed before starting to compute index stats
- Always fully compute index stats (never bother estimating: they run fast)
- Use dbms_stats.gather_schema_stats with a sample size of 20%. Oracle will automatically do a full calculation of stats.
- NEVER analyze SYS or SYSTEM's tables; they're finely tuned to use the RBO to perform well. If you do this by accident, quickly delete the stats.
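Some of the scheduling rules above can be sketched as a small decision helper. This is purely illustrative: the function name, the `workload` labels, and the 10M-row "large" threshold are my own assumptions, not anything Oracle ships.

```python
def stats_strategy(row_count, workload, is_static=False, large_threshold=10_000_000):
    """Return (method, frequency) per the strategy list above.

    Hypothetical helper: encodes the compute-vs-estimate rules for
    static, DSS and OLTP tables; thresholds are made-up assumptions.
    """
    if is_static:
        return ("compute", "once")          # static tables: once/sporadically
    if workload == "dss":
        return ("compute", "weekly")        # DSS tables: compute weekly
    # OLTP tables: compute nightly if small, estimate nightly if large/24x7
    if row_count < large_threshold:
        return ("compute", "nightly")
    return ("estimate", "nightly")

print(stats_strategy(5_000, "oltp"))        # small OLTP table
print(stats_strategy(50_000_000, "oltp"))   # large / 24x7 OLTP table
```

A real implementation would drive dbms_stats calls from this decision, but the point is just that the rules form a simple lookup on table size and workload type.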
Note: "analyze table compute statistics" actually defaults to this command:
analyze table compute statistics for table for all indexes for all columns size 1
Therefore, a full "analyze table" command gives all your indexes a "free" analyze as well.
- dbms_stats.gather_schema_stats does NOT perform index stats computation unless you use the "cascade" option:
exec dbms_stats.gather_schema_stats(ownname=>'scott',cascade=>true);
- Some suggest doing a cascade=>false, then specifically fully analyzing indexes.
--- Q: Can I automatically gather statistics on my database (outside of a custom script)? Auto Stats gathering/Automatic job to gather stats
A:
8i: no
9i:
- Roughly, at a table level. alter table <name> monitoring instructs Oracle to roughly keep track of the number of rows changed since the last gather stats was performed.
- set "dynamic sampling" to "4" and statistics will be "sampled" at every parse. (See below for an overview of dynamic sampling)
- a dbms_job can be run that gathers stats for you
10g and up: yes, via Automatic Stats gathering. see
https://blogs.oracle.com/UPGRADE/entry/automatic_statistics_gathering_job_preferences
and Metalink Note:1233203.1 - FAQ: Automatic Statistics Collection
Oracle Database 11.2: http://docs.oracle.com/cd/E25178_01/server.1111/e16638/stats.htm#i41282
Oracle Database 12.1: https://docs.oracle.com/database/121/TGSQL/tgsql_stats.htm#TGSQL-GUID-E4EFD512-EAF9-4AF3-943F-FDEC7E47B23C
--- Q: What is the impact of the "granularity" option in dbms_stats.gather_table_stats?
A: Huge impact on stat gathering when your tables are partitioned. Options (only pertinent if the table is partitioned):
DEFAULT: Gather global- and partition-level statistics.
SUBPARTITION: Gather subpartition-level statistics.
PARTITION: Gather partition-level statistics.
GLOBAL: Gather global statistics.
ALL: Gather all (subpartition, partition, and global) statistics.
--- Q: What is "Dynamic Sampling?"
A: from Oracle-L Discussions 8/7/04, specifically Mladen Gogala ??
--- Q: What is the importance of Histograms?
A: A Histogram is a statistical representation of data skew within a column of non-unique values. A frequency distribution. The CBO can use histogram data to pick index plans and return data faster. Without histograms, Oracle assumes that data has a uniform distribution amongst distinct values in a column and queries accordingly, but if you have significant skew, then you want to tell the Optimizer ahead of time.
Simple example: a VA-based store has a customer table and 90% of the customers live in Virginia. That data is heavily skewed towards VA.
SQL> select * from dba_histograms; for example.
Histograms are created for you automatically when you run dbms_stats.gather_table_stats without specifying the "size" option in method_opt. E.g.:
SQL> exec dbms_stats.gather_table_stats ('scott','emp', method_opt=>'FOR ALL INDEXED COLUMNS', cascade=>TRUE);
Custom histograms can be created by changing the parameter in the "size" command. This is from the Oracle manual, and forces a 10 "bucket" histogram on the sal column:
SQL> exec DBMS_STATS.GATHER_TABLE_STATS ('scott','emp', METHOD_OPT => 'FOR COLUMNS SIZE 10 sal');
Of course, Oracle recommends NOT making custom Histograms, instead allowing the dbms_stats package to make this decision for the user:
SQL> exec DBMS_STATS.GATHER_TABLE_STATS ('scott','emp', METHOD_OPT => 'FOR COLUMNS SIZE auto');
There is debate whether extraneous Histograms cause performance issues. You can specifically gather only known skewed data with the method_opt=>'for all columns size skewonly' option, but this will cause lots of overhead. Example:
SQL> exec dbms_stats.gather_table_stats ('scott','emp', method_opt=>'for all columns size skewonly', cascade=>TRUE);
http://asktom.oracle.com/pls/asktom/f/f?p=100:11:0::::P11_QUESTION_ID:66812720989778
Good asktom thread on the topic.
--- Q: What are the types of Histograms collected?
A: detailed explanation: https://docs.oracle.com/database/121/TGSQL/tgsql_histo.htm#TGSQL366
select distinct(histogram) from dba_tab_columns;
NDV = Number of Distinct Values. Default maximum in 11g: 254 buckets.
- Frequency: simple Histogram used on low cardinality data: like 2-3 distinct values, or any NDV less than the default number of histogram buckets (254).
- Top N Frequency: used when most of the data is among a small set of "popular" values while there are other outliers that have statistically insignificant numbers of rows. More efficient than a Frequency histogram.
- Height Balanced: legacy histogram that just evenly divides the values into buckets so that each has the same number of rows. Behavior changes between 11g and 12c with these types of histograms; if you re-gather after migration they'll become Hybrid histograms.
- Hybrid: a combination of the features of Frequency and Height Balanced. Created by forcing a number of buckets at the time of gather stats.
- None: the data is uniformly distributed and does not need a histogram. Used with PKs, number values, varchars.
--- Q: What is the valid range of the "estimate_percent" option in gather_table_stats?
A: The valid range is [0.000001,100]. Note: passing any estimate % greater than 49% will result in 100% computation.
--- Q: Is there any advantage to running multiple versions of the gather_table_stats command, passing in seemingly mutually exclusive options (like "for all indexed columns" and "for all columns size skewonly")?
A: ???
Ran a test (cascade=>true in all cases):
- for all columns: dba_tables, dba_indexes, dba_tab_cols for all used cols, dba_histograms for all used cols
- FOR ALL COLUMNS size skewonly: same as "for all columns"
- FOR ALL indexed COLUMNS: dba_tables, dba_indexes, dba_tab_cols has only indexed columns, and dba_histograms only has histograms for the indexed columns
- FOR ALL COLUMNS size auto: dba_tables, dba_indexes, dba_tab_cols for ALL columns regardless of use, dba_histograms for all columns, regardless of use
- for all hidden columns: populates dba_tables, dba_indexes, then only dba_tab_cols and histograms for "hidden" columns (behaves the same way as the indexed columns option)
When I ran for all indexed columns, THEN ran for all columns size skewonly, it OVERWROTE the stats gathered for all indexed columns; therefore there's zero value in running twice.
Best option: for all columns size skewonly if you can afford it, for all indexed columns if you cannot.
--- Q: How bad of a performance hit is it to run "analyze statistics" on a table?
A: Per Oracle doc, the time necessary to compute statistics for a table is approximately the time required to do a full table scan and sort the rows. This can end up being a bit worse if a large table consumes all of memory and Oracle resorts to using a temp tablespace table to sort. To compute exact stats, Oracle needs enough room in memory to scan and sort ALL table records. If you use skewonly options, the timing goes up considerably.
--- Q: Do Analyze Statistics commands block access to a table? Prevent DML? Do a table lock?
A: dbms_stats should NOT block access to the table; it's essentially performing selects (a full table scan) into memory to calculate data values. It does not lock any particular record.
analyze table validate structure WILL lock the table for the duration of its execution, UNLESS you pass it the ONLINE parameter.
ex: sql> analyze table xxx validate structure cascade online;
--- Q: If I drop and load the same amount of rows on a regular basis (i.e., a nightly batch job that fully reloads the same table w/ the same number of rows), is it necessary to re-run a full analyze statistics after each load, or is one compute stats enough?
A: Mixed opinions. It may be "somewhat" accurate, given the exact same number of records, but unless the data varies only slightly, the key components of the statistics might change enough to warrant a full re-analyze statistics job each time.
--- Q: Will analyze statistics commands flush out the buffer pools?
A: Probably: it's hard to say with certainty, but analyze stats has to do serious analysis of every data page for a table. A 100% analysis is the equivalent of doing a full table scan for each column. This much i/o activity into the db_cache is likely to clear out most of your default buffer pool. It does NOT automatically flush the entire buffer pool though.
--- Q: Does an analyze table lock the table in any fashion (preventing updates/inserts)?
A: No: analyze stats are simply selects of data pulled into memory for the purposes of heavy computing. No locking is done.
--- Q: How do you manually flush the buffer pools? How do you dump memory/dump buffers? How do you dump oracle's memory? How do you clear memory? How do you clear cache?
A: Several methods:
- alter system flush buffer_pool_name;
- alter system flush shared_pool;
- shutdown/restart
- force a huge tablescan
- offline a tablespace (removes all its buffers from memory)
10g and up:
alter system flush shared_pool;
alter system flush buffer_cache;
flush buffer_cache clears out every buffer in the SGA, including keep, recycle and default.
note: do NOT do a flush of the buffer_cache on anything except a test system. It will remove the entire buffer_cache and all queries will have to incur disk i/o to execute. It will destroy performance on a prod box.
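To see why that warning matters, here is a toy LRU cache in Python showing that a flush turns every subsequent read into a "physical" read. This is a conceptual sketch only; Oracle's buffer cache is far more sophisticated, and the class and names here are made up.

```python
from collections import OrderedDict

class BufferCache:
    """Toy LRU 'buffer cache' (conceptual; NOT Oracle's implementation)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()   # block_id -> block contents
        self.physical_reads = 0

    def read_block(self, block_id):
        if block_id in self.buffers:
            self.buffers.move_to_end(block_id)      # logical read: cache hit
        else:
            self.physical_reads += 1                # physical read from "disk"
            self.buffers[block_id] = "data"
            if len(self.buffers) > self.capacity:
                self.buffers.popitem(last=False)    # age out the LRU block

    def flush(self):
        """Analogous to 'alter system flush buffer_cache': drop every buffer."""
        self.buffers.clear()

cache = BufferCache(capacity=100)
for b in range(10):
    cache.read_block(b)      # 10 physical reads; blocks are now cached
for b in range(10):
    cache.read_block(b)      # all cache hits; no new physical reads
cache.flush()
for b in range(10):
    cache.read_block(b)      # after the flush, every read hits "disk" again
print(cache.physical_reads)  # 20
```

On a busy system the "re-warm" phase after a flush is exactly the disk i/o storm the note above warns about.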
--- Q: How do you flush a table from a KEEP or RECYCLE pool?
A: alter table <table> storage (buffer_pool default);
--- Q: How do you check the usage of different sized buffer pools? Can I tell how full my keep and recycle pools are? (Equivalent of Sybase's sp_sysmon buffer usage section.)
A: v$mystat table: e.g.,
select sum(s.value) buffers_accessed from v$mystat s;
run a query like this before/after a query to get an idea of the buffers it's using.
--- Q: What is more efficient to join on, varchar() or numeric fields? Is this an oracle myth?
A: Numeric, but the difference is negligible as long as the varchar() fields are small (less than 10 bytes). Longer fields will result in far bigger index overhead and slower join performance. Do I have any proof of this??? Where did I get that statement?
5/25/06: discussion on Oracle-L: table joins result in three operations: i/o buffer gets, block parsing, and then comparison of table/index values within memory. All three operations will occur irrespective of the datatype.
--- Q: How does Oracle's Optimizer work?
A: Two part answer: the old way and the new way.
Before Oracle 7.0, only a "Rule Based Optimizer" was available in Oracle. The optimizer would analyze each query as it was executed and, based on a 15 point ranking system, would then satisfy the query based on the "best" rule it could satisfy.
As of Oracle 7.0, a more conventional "Cost Based Optimizer" was introduced. The RBO was kept around for backwards compatibility. Cost Based optimizing tries to compute/calculate the least "cost" of a query based on the known indexes/statistics. As of Oracle 8.0, Oracle ceased improving the RBO and thus implicitly asked all customers to migrate code/queries to the CBO.
--- Q: What are the 15 ranking points of the RBO/Rule Based Optimizer?
A:
1. Single row by ROWID
2. Single row by cluster join
3. Single row by hash cluster with unique or primary key
4. Single row by unique or primary key
5. Cluster join
6. Hash cluster key
7. Indexed cluster key
8. Composite key
9. Single column indexes
10. Bounded range on index columns
11. Unbounded range on indexed columns
12. Sort merge join
13. MAX or MIN on indexed column
14. ORDER BY on indexed columns
15. Full table scan
--- Q: What is a good process to follow when converting RBO code to CBO?
A: Mostly from a John Kanagaraj posting to oracle-l 11/13/03
* Trace all SQL coming into a live RBO-only system
* Identify any code that uses the RULE Hint (in spite of being in a RULE based DB)
* Create a clone of prod on a server of the same or similar capacity
* Collect Statistics (COMPUTE if you can)
* Set the OPTIMIZER_MODE to CHOOSE; review/reset other CBO related parameters (see my paper)
* Let the Developers and UA testers loose on that Db
* Use Cary's method to identify the top set of business processes and determine if the performance is Ok
* If not Ok, then tune it...
--- Q: How can you tell when a table was last analyzed? An index?
A:
- select last_analyzed from user_tables where table_name = 'name';
- select last_analyzed from user_indexes where index_name = 'name';
last_analyzed gets populated at the START of the dbms_stats job.
--- Q: What is the hint syntax?
A: select /*+ HintName(hintoptions) */ cola, colb from table ...
--- Q: What are the "hints" available to a Query writer? What is a list of hints in Oracle?
A: (note: some of these are new to 8i, not avail in 7.3.x)
Syntax: select /*+ hintname hintoptions */ columns from tables where ...
Query hints:
- All_rows: Forces strictly a cost-based approach w/ the goal of best throughput (minimum resource consumption).
- First_Rows: forces minimum resources to return the first row.
- Rule: Forces old fashioned rule-based optimizing (see elsewhere for the 15-step rules used to resolve queries)
- Choose: Forces old fashioned rule-based optimizing if no statistics exist for any tables. If any statistics exist, the CBO is used.
- Cache(table): forces blocks retrieved for the table to be tacked on the beginning/most recently used end of the LRU. Useful for small lookup tables that you want to stay in memory but which may be blown out by huge scans.
- NoCache(table): opposite of Cache, and actually the default behavior.
- Push_Subq: causes nonmerged subqueries to be evaluated earliest.
Access Method hints (these usually depend on an index/cluster):
- Full(table): forces the optimizer to always do a full table scan; works well when doing huge group bys, or when grabbing most columns in a table
- RowID(table): forces access by rowid, not by some other means.
- Cluster(table): forces a cluster scan (obviously only applicable if you have clustered objects).
- Hash(table): forces a hash scan (only applicable to clustered tables)
- Hash_AJ(table): transforms a "not in" subquery into a hash "anti-join"
- Index(tablename index): forces use of an index or indexes. Multiple index lists will result in the optimizer considering each index, and selecting the one index w/ the lowest cost. Passing an empty index list will automatically scan every index on the table and use the cheapest. The optimizer might also merge indexes in the list given to provide the answer (if passed in several indexes)
- Index_asc(table index): forces an ascending scan of the specified index. (note: this is current default behavior anyway; the option is provided in case Oracle decides to change things in the future).
- Index_desc(table index): opposite of index_asc
- Index_Combine(table index): forces the optimizer to perform a boolean combination on the list of bitmap indexes passed to retrieve the answer.
- Index_ffs: FFS == Fast Full Scan; forces a fast full index scan instead of a table scan.
- Merge_AJ(table): transforms a "not in" subquery into a merge anti-join
- And_Equal: allows several single-column indexes to be specified to the Optimizer. Forces the optimizer to merge the results of several index scans together. Must specify at least 2 indexes, no more than 5.
- Use_Concat: forces combined OR conditions to be transformed into a compound query using a "union all."
Note: Syntax below is:
select /*+ Ordered [join operator](table) */ cola, colb from table...
select /*+ index (table index) */ count(*) from table...
Join orders:
- Ordered: forces oracle to join the tables in the explicit order provided. Otherwise the optimizer may join the tables in the order it sees fit
- Star: forces a star query plan to be used (data warehousing only). Requires at least 3 tables, with a composite index to be used as the "central" or star point with 3 columns
- Star_Transformation: forces oracle to consider a star transformation
Join Operations:
- Use_NL (tablea tableb): Forces a nested loop join. Sometimes a nested loop can return rows faster than a sort-merge.
- Use_Merge: forces a sort-merge join
- No_Merge: prevents the merging of mergeable views
- Use_Hash: forces a hash join
- Driving_Site: forces query execution to be done at a different "site" than normally chosen by the optimizer.
Parallel Execution Hints (only when using parallel execution in your server):
- Parallel
- no_parallel (table_name); noparallel deprecated in 10g
- no_parallel_index (index_name)
- Append
- NoAppend
- Parallel_index
Note: Oracle normally uses one and only one B*tree index per table per step. A second (different) index might be used if the table appears again in the join list. The CBO can merge b*tree indexes to achieve its needs. The CBO can also merge bitmap indexes if needed.
--- Q: What happens if there is a syntax error in the hint?
A: All hints AFTER the incorrectly formatted hint will be ignored.
--- Q: How do I force the use of an index in Oracle?
A: use an Index hint, as described above. Note: if your stats are fully up to date and the CBO still doesn't choose your index, forcing it may not be the best solution.
--- Q: Can you use more than one hint in a SQL query?
A: Sure:
SELECT /*+ index(a index_A) index(b index_B) */ column1,column2,column3,column4
from table_a a, table_b b
where a.cola = b.colb and ...;
--- Q: What's the best way to find all the stored procedure code for places where hints are being used?
A:
1. this searches stored pl/sql only:
select distinct name from user_source/dba_source
where type in ('FUNCTION','PROCEDURE','PACKAGE BODY') and text like '%/*+%';
2. search in v$sql during normal hours (perhaps v$sqlarea too, but it will require more cpu resources)
--- Q: How do I get a list of undocumented hints? How do I get information on these undocumented hints?
A: ?? there are good lists of undocumented hints online and via google searches, but little more information about them.
--- Q: What is the SYS_DL_CURSOR hint?
A: a placeholder hint used in conjunction with Oracle's direct-path loading statement. If a direct-path insert is executed, v$session/v$sqltext will show an insert statement in the form of:
INSERT /*+ SYS_DL_CURSOR */ INTO table (columns) values (null,null,null...)
Obviously not a valid insert statement, but utilized internally. Frequently seen during Informatica bulk loads.
--- Q: How do I get the source of a view?
A: select * from user_views; it has the text of the view right there.
--- Q: how do I get the source code of a stored proc or package? How do I see the code of a stored proc?
A: select text from dba_source where name='PKG_SRGCL_CASE';
--- Q: How can I see what SQL is running on my server?
A: v$sql, v$sqltext, v$sqlarea. See pre-formatted queries in oracle_admin.sql for good examples
--- Q: How far back does v$sqlarea save sql statements?
A: Back to instance startup (though statements can age out of the shared pool). Thus, any analysis of the statements in v$sqlarea will be a "longer term" analysis of sql being executed on the database.
--- Q: What are the different join/data access/index access methods noted in Explain Plans, and what are the best for specific types of queries?
A: These are the high-level data access and join methods.
Major join methods:
- Nested Loop
- Hash Join
- Sort-Merge join
- Anti Join
Table Access Methods:
- Table Access Full
- Table Access by Index RowID
- Table Access by Local Index RowID
- Hash Scan
B*Tree Index access methods:
- Index Fast Full Scan
- Unique Scan
- Range Scan
- Range Scan Descending
- Full Scan
- Skip Scans
- Join Scans
Bitmap Index access Methods:
- Bitmap index Single value
- Bitmap index Full scan
- Bitmap Index Range Scan
- Bitmap Conversion to Rowids
- Bitmap And
- Bitmap Merge
- Bitmap Key Iteration
Conversions/Transformations:
- Sort Group by
- Partition Range All
- Partition Hash All
- Temp Table Transformation
- Recursive Execution
- Insert Statement
- Load as Select
- Buffer Sort
- View
Explanations:
- Nested Loop: essentially performs a cursor-like operation, scanning through results in the "outer" or "driving" table, then looping through them and for each row finding the joined rows in the inner table. Very important to have relationships between the two tables, else duplicative row retrieval occurs. Best for small result sets, inefficient for large (>10,000 rows returned).
cost = access cost of A + (access cost of B * number of rows from A)
You can force a nested loop with the hint /*+ USE_NL(table1 table2) */
- Hash Join: optimizer builds a hash table in memory, storing all the join keys of the (smaller) table, then scans through the second (larger) table looking for the joined rows. Efficient when the smaller table fits entirely in memory. When the smaller table cannot fit into memory, a hash-partition occurs and oracle writes some partitions to temp. Swapping partitions in and out can occur if the smaller table cannot fit into memory, decreasing performance. Best for larger result sets (>10,000 rows) when using the CBO. Used when two large tables are joined, or if a large % of a table is retrieved. Affected by the hash_area_size and hash_join_enabled parameters.
You cannot use hash-joins unless your join condition is equality.
cost = (access cost of A * number of hash partitions of B) + access cost of B
You can force a hash-join with hint /*+ use_hash(table1 table2) */
- Sort-Merge join: best used to join rows from two tables that don't have an established connection to each other. Not seen normally, since the hash-join will be defaulted to. However, sort-merge will be used if the data is already sorted. Used when join conditions contain inequality (>=, >, etc). Affected by the sort_area_size parameter. Best for larger result sets (>10,000 rows) when using the RBO.
cost = access cost of A + access cost of B + (sort cost of A + sort cost of B)
You can force a sort-merge with hint /*+ use_merge(table1 table2) */
- Cartesian Join/Cartesian product: used when two tables have no join condition. Every row in tablea is retrieved, and for each one of them, every row in tableb is retrieved. Generally to be avoided; results from extra tables in the from clause without an associated where condition. (aka cross-product or cross products). You can force it by using /*+ ordered(table1 table2) */
- Hash Scan:
- Anti Join:
- Table Access Full/Full Table Scan: The entire table is read/scanned into memory to resolve the query. Not always bad; if it's a small table, or if you're returning a large percent of the rows (25% or more), table scans can be more efficient. Oracle reads all rows up to the "High water mark" (HWM) for the table. Severely affected by very high HWMs. Blocks are read sequentially, and the parameter db_file_multiblock_read_count can affect performance of FTS. Oracle defaults to a FTS when no suitable index exists, a large % of data is to be read, it's a small table (if the table is less than db_file_multiblock_read_count blocks under the HWM, it can be read in one i/o and FTS defaults), there is a high degree of parallelism, or the full hint is used.
Index access methods:
- Unique Scan: returns at most one rowid of data; only applicable if the index being read is a PK constraint index or a unique index. Used when all columns of a unique b*tree index are specified in equality conditions. Basically traverses the b*tree by flowing through the leaf structures to find the exact match.
- Fast Full Index Scan: occurs when an index exists that contains all the keys needed for the query. Faster than a normal index scan b/c it can use multiblock i/o, but data does not come back in sorted order. Good for count(*) or queries where you don't care about the order.
- Range Scan: when returning a range of values from an index instead of individual results. Takes advantage of left-right linking of leaf pages for efficiency. Traverses the tree to find the beginning value, then scans to the right for the rest of the data. Data returned in ASC order. The default index access method, since indexes are already sorted in ASC order.
- Range Scan Descending: Reverses a range scan; traverses the tree to find the end, then scans left until it finds the beginning. Indexes are by default stored in ascending order; sometimes you want to find the LAST value first.
- Index Skip Scan: improves on full index scans
- Full Scan
- Bitmap index Single value
- Bitmap index Full scan
- Bitmap Index Range Scan
Conversions/Transformations:
- Bitmap Merge: where multiple bitmap indexes on a table return bitmap-matched rowids and then the rowid result sets are merged together to deliver the result set.
??? great reference page in Oracle docs:
http://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements006.html
--- Q: What is a clustered object? What is clustering in oracle?
A: (from the manual) "A cluster is a group of tables that share the same data blocks because they share common columns and are often used together." The benefit is to cluster tables commonly joined together for i/o performance improvement.
You also save space b/c the "cluster key value" isn't stored over and over for similarly keyed values.
Types of clusters:
- Index clusters: default cluster method, storing data pages based on index value
- Hash clusters: stores rows based on the output of a hash function.
--- Q: Is there a way to database-ize the trace files?
A: Yes, you can recreate the trace analyzer tables as GTT tables to eliminate file system space issues. CREATE GLOBAL TEMPORARY TABLE trca$trace, etc
--- Q: What are some caveats/good-to-knows about working w/ Indexes?
A: - Oracle recommends only creating indexes on tables where you're querying for 2-4% of the rows. But then it says the break-even point for the optimizer to table-scan versus use an index is 25% of the rows. (sounds like the old sybase 20% myth).
- Oracle automatically creates indexes on PK constraints.
- If you define FKs on a table and do NOT index those columns, then any DML to the parent table will place a shared lock on the entire FK table.
- Calculate the selectivity percentage of a field before determining the correct type of index to create (selectivity % = distinct values/total rows). If selectivity is very low, use a bitmap index.
- If a composite index's columns match a query's columns exactly, the optimizer never has to look at the table.
- Make the "leading" fields in a composite index match columns frequently used in where clauses.
- Order the fields in a composite index from most selective to least selective (most distinct values to least number of distinct values, or from highest cardinality to lowest cardinality).
--- Q: What is the break-even point for using an index versus just doing a full table scan of the data?
A: 25% of the data. This may be an Oracle myth though.
--- Q: What happens if you do not have indexes on FK fields?
A: Bad things:
- any DML to the parent table will place a shared lock on the entire FK table.
- deadlocks can occur doing simple table inserts
--- Q: Is it a myth that full table scans (FTS) are always bad, and that an index should always be used?
A: Of course it's a myth: a full table scan will generally outperform an index scan when you're obtaining more than about 25% of the rows.
--- Q: What are buffer busy waits (BBW)? How do you resolve them?
A: Generally they indicate contention for resources within the database. Run statspack and look at the "Buffer wait Statistics for DB" section for the root cause. 4 root causes: (taken from c.d.o.s post 2/18/02 Ricky Sanchez (rsanchez@more.net))
- Data: if on tables: increase freelists, freelist groups. If occurring during updates, then redesign the update statement to spread around update attempts.
- Data: if on indexes, probably contention on a sequentially increasing sequence key with a PK index on it. Possible solutions are to use a "reverse key" index, or perhaps change the indexing scheme. There also could be contention on the index pages b/c of too many indexes.
- Segment Header: implement freelist groups, which creates additional blocks within the table for freelist information
- Undo: probably due to too few rollback segments. Try increasing.
- Undo Header waits: same
--- Q: How can you detect insert contention on heap tables? What are some tips for tuning heap tables to avoid contention?
A: 9/23/03 conversation on oracle-L, esp "Tanel Poder"
Multiple concurrent inserts to the same table will cause "buffer busy waits" on the primary key index insert. Ways to avoid:
- Reverse Key index: this will cause multiple inserts to go to different blocks. The caveat here is that range scans will cost far more than w/ normal indexes, since the blocks will be spread all over.
- add a high-cardinality column to the primary key, so that the same index blocks are not hit all the time. (use a "wrapping" fake sequential key, for example).
- hash-partition your tables and indexes: this causes indexes to be spread over different partitions.
- Increase # of freelists to match the max # of concurrent inserts
- _bump_highwater_mark_count: the number of blocks to put on the free list whenever clean space is needed. Defaults to 5; could increase to 20.
--- Q: What are the issues to be known when looking at buffer cache hit ratios (BCHR)? Is it an oracle myth that the BCHR must always be very high?
A: Yes, it is a myth: a high BCHR often indicates problems, not optimal performance.
Defined: BCHR = (LIO - PIO)/LIO
- a very high BCHR sometimes is indicative of bad sql constantly re-hitting the same blocks in your buffer cache.
- a very LOW BCHR in an OLTP environment probably indicates the need to increase your db_buffer_pool.
- table scans cause blocks to be put on the LRU end of the memory chain, and thus cause your BCHR to be very low. Therefore a low BCHR in DSS applications is not always bad.
--- Q: Why does v$open_cursors show cursors that are long closed? Explicitly closed?
A: Because Oracle caches cursors after they're closed. Set the undocumented server parameter "_close_cached_open_cursors" to "TRUE" to force this behavior (defaults to false). Apparently, this parameter used to exist in 8.0 and below, but was obsoleted in 8i, which is why this became an undocumented feature. In 8i, a new management algorithm was included that improved the cursor feature. Now, when a cursor is finished, it is set to "closed" but remains in memory. This allows the server to theoretically "reuse" the cached cursor code. If the same user sends the identical query back to the server, the code already exists in memory and does not need to be re-optimized/re-analyzed by the engine. The caveat of setting the parameter is a performance hit on frequently re-issued cursor queries.
--- Q: What are some ways to tune your network connection to eliminate lots of dead sessions?
A: - see Metalink document #44694.1
- OS tuning: tcp_keep_alive, other ndd settings
- idle_time parameter in server: automatically disconnects idle sessions (timeout/time out value)
- sqlnet.expire_time
- check application coding, ensuring commit statements are done
- autocommit on
You can also tune the SDU (Session Data Unit) and the TDU (Transport Data Unit). Configurable by modifying the tnsnames.ora and/or listener.ora files.
- SDU is the 'Session Data Unit', the size of the packets to send over the network. (aka network packet size)
- TDU, the 'Transport Data Unit', is the default packet size used within SQL*Net to group data together.
Ideally, the TDU parameter should be a multiple of the SDU parameter; else the TDU will force empty packets to be sent, wasting network resources.
Note: standard ethernet networks use an MTU=1514 (token ring=4204). SDU and TDU default to 2048 bytes, with a maximum value of 32768.
Examples:
* SDU=1024, TDU=1536: SQL*Net will store up to 1536 bytes in a buffer and send this on the network. The lower network layer however will split this packet up into two physical packets of 1024 and 512 bytes, and send these to its destination.
* SDU=1514, TDU=1000: SQL*Net will store up to 1000 bytes and then send these to the lower network layer for distribution. This is a waste of network resources, since the SDU can store an additional 514 bytes per request.
--- Q: What is the default idle_time?
A: ???
--- Q: Where do network trace files get stored?
A: $ORACLE_HOME/network/trace. They're in the form svr_[unix pid].trc. However, these trace files are not "tkprof-able." These files are generated by setting "trace_level_server" in sqlnet.ora
?? how to read them.
--- Q: What is the old-style trick to prevent the RBO from using a known index on a column? Or forcing a table scan?
A: Appending a blank string (or adding 0) to columns.
examples:
sql> select col1 from table where col1 = v_parameter+0;
sql> select col2 from table where col2 = v_parameter || '';
--- Q: How big should I make my total memory? How big should my buffer pools be?
A: - Total SGA = 1/3 to 1/2 of total machine RAM.
- shared pool: 2-3 times the size of the buffer cache (about 100mb)
- Buffer cache: start at 4000 * block size (usually about 32mb)
--- Q: Are there any downsides to having too large of an SGA?
A: (arguments from Oracle-L discussion, 7/19/04)
- If the SGA is larger than your shared memory segment size, it will cause swapping and paging at the OS level.
- Performance hits during checkpoints: more blocks in memory takes the DBWx processes longer to clean dirty blocks
- Performance impact in Free Buffer scans -- longer buffer chains
- Performance impact on Cache Buffer Chains latch -- more buffers per latch means that the latch may be held more frequently
- Delayed Block Cleanouts -- modified blocks remaining in memory requiring cleanups and causing potential ORA-1555s
--- Q: How do you bind a table to a Cache buffer? How do you "cache" a table in your memory? Are there advantages?
A: Old way (Oracle 7): use the "cache" clause. ex: sql> alter table test cache;
- When you bind a table to cache, Oracle merely puts the table's contents at the MRU end of the DEFAULT chain. From there, if not used, they'll just age their way off the chain. But a table scan will blow them away anyway.
New way (oracle 8 and beyond): bind the table to the "KEEP" pool. With the advent of KEEP and RECYCLE, the "cache" and "nocache" options are useless.
--- Q: How do I make Keep, Recycle, alternative sized caches in memory? How do I create a keep pool?
A: ALTER SYSTEM SET db_cache_size='1500M' SCOPE=MEMORY;
ALTER SYSTEM SET db_keep_cache_size='150M' SCOPE=MEMORY;
To make them permanent, make sure to add lines like this to the pfile:
*.db_keep_cache_size=31457280
*.db_recycle_cache_size=10485760
--- Q: How do you bind a table to the keep or recycle pools?
A: SQL> alter table ehri20test.CRNT_RFRNC_DATA storage (buffer_pool keep);
--- Q: How do you remove a table that you've previously altered and bound to a Cache?
A: Several ways:
1. bind it back to default: SQL> alter table XXX storage (buffer_pool default); and then select * from it.
or, take the underlying tablespace offline.
or, shutdown the server :-)
--- Q: How can you tell what tables are bound to what pools? How can you tell what is currently in the KEEP or RECYCLE pool?
A: select * from dba_tables where buffer_pool <> 'DEFAULT'; same for dba_indexes
--- Q: What are all the caching options available to you in dba_tables?
A: select * from dba_tables, referencing
http://docs.oracle.com/cd/E11882_01/server.112/e40402/statviews_2117.htm#REFRN20286
- CACHE: Indicates whether the table is to be cached in the buffer cache (Y or N)
- BUFFER_POOL: indicates what buffer pool the table is bound to: defaults to DEFAULT, which is the main buffer pool. Can be KEEP or RECYCLE if specifically bound there.
- FLASH_CACHE: Database Smart Flash Cache status: default, keep, or none (not applicable unless you have Flash)
- CELL_FLASH_CACHE: Exadata specific; binding to Exadata flash. default, keep or none.
- RESULT_CACHE: Result cache mode annotation for the table: default, force, or manual.
--- Q: What is a good rule of thumb for finding candidates to bind to the Keep and Recycle caches?
A: Rough rule:
- tables 10% or smaller than your default pool: bind to KEEP
- tables 200% or larger than your default pool: bind to RECYCLE.
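The rough rule above can be sketched as a query. This is a hypothetical illustration only: the 10%/200% thresholds come from the rule of thumb above, and it assumes db_cache_size is explicitly set (under automatic SGA management the parameter may report 0, making the comparison meaningless).

```sql
-- Illustrative sketch: flag KEEP/RECYCLE candidates by comparing each
-- table segment's size to the configured default buffer cache size.
SELECT s.owner, s.segment_name, s.bytes,
       CASE
         WHEN s.bytes <= 0.10 * p.pool_bytes THEN 'KEEP candidate'
         WHEN s.bytes >= 2.00 * p.pool_bytes THEN 'RECYCLE candidate'
         ELSE 'leave in DEFAULT'
       END AS suggestion
  FROM dba_segments s,
       (SELECT TO_NUMBER(value) AS pool_bytes
          FROM v$parameter
         WHERE name = 'db_cache_size') p
 WHERE s.segment_type = 'TABLE'
 ORDER BY s.bytes DESC;
```

Size alone is only a first cut; actual access patterns matter more.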
Smarter: analyze each table by its usage, studying the sql access
- bind frequently accessed tables/reference tables/frequently joined tables to KEEP
- bind frequently table-scanned tables, excessively large tables to RECYCLE.
Data Warehousing:
- put all dimensions and their indexes in KEEP
- put all facts in the recycle pool
--- Q: Can you bind a table's indexes to the keep/recycle pools as well?
A: Yes of course: ex: alter index ehri20test.CD_INDEX storage (buffer_pool keep);
--- Q: Can you bind a materialized view (MV) to a keep/recycle pool?
A: You bet:
sql> alter materialized view dw.mv_eeoc storage (buffer_pool keep);
--- Q: What is the difference between "pinning" an object in the shared pool versus binding the object to the KEEP cache?
A: One is for code, the other is for tables:
- tables: alter table XXX cache, or alter table XXX storage (buffer_pool keep);
- code: execute sys.dbms_shared_pool.keep('owner.procorpkgname');
--- Q: What is SQL Result Cache? /*+ result_cache */ and how do you use it?
A: a new pool in 11g that caches SQL results and stores them for subsequent use. It can enable amazingly fast performance.
select /*+ result_cache */ * from table;
The first time through, the results are slow as the cache seeds, but from there on performance will be amazing.
--- Q: How do you configure Result Cache?
A: 3 primary configuration settings:
alter system set result_cache_mode=manual scope=both sid='*';
alter system set result_cache_max_size=400M scope=both sid='*';
alter system set result_cache_max_result=5 scope=both sid='*';
Pulls from SGA memory (specifically, part of the shared pool). If you're using automatic SGA, you can tune this up and down dynamically. If the shared pool is defined, you're limited to 75% of the shared pool in terms of the max size you can set the result_cache buffer.
--- Q: How do you flush the result cache?
A: exec dbms_result_cache.flush;
--- Q: What is in the result cache?
A: query the V$ views for the result cache:
select * from v$result_cache_objects;
select * from v$result_cache_statistics;
select * from v$result_cache_memory;
select * from v$result_cache_dependency;
set serveroutput on
exec dbms_result_cache.memory_report;
--- Q: Where does result_cache come from? SGA or PGA?
A: SGA, specifically the shared pool
http://docs.oracle.com/cd/E11882_01/server.112/e40402/initparams220.htm#REFRN10272
Other useful links:
http://www.oracle.com/technetwork/articles/datawarehouse/vallath-resultcache-rac-284280.html
http://www.oracle.com/technetwork/articles/sql/11g-caching-pooling-088320.html
--- Q: What is the difference between "Logical I/O" (LIO) and "Physical I/O" (PIO)?
A: - Logical: db block gets + consistent gets. LIO is the optimizer's query as to whether a block of data is already in a buffer cache
- Physical: physical reads: occurs when the block is NOT in a buffer cache, and a disk read is required.
Logical I/O should always be the determining factor the Optimizer uses to choose between an index scan and a table scan. Note: you always do an LIO, but not all LIOs then require a PIO.
--- Q: Why do I have a large number of "none" results in my v$sql/v$sqlarea for the optimizer_mode? What are the valid values for this column?
A: The optimizer shows "none" when sql becomes invalidated, when pl/sql dependencies have changed, or when a stored proc is called w/ bad parameters. Also, soon after analyzing, the optimizer has to re-build plans, causing a high percentage of "NONE" to appear. If lots of NONE appears and you HAVE NOT recently re-analyzed, then you might have problems.
Valid values are any of the "optimizer hints" available. Most common are:
- CHOOSE (uses the CBO)
- NONE (as explained above)
- RULE (RBO)
Less frequently seen:
- "MULTIPLE CHILDS PRESENT" (present in v$sqlarea when multiple v$sql optimizer paths were used)
- ALL_ROWS (optimizer hint)
- ... other optimizer hints.
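To see how prevalent NONE is on your own server, a quick breakdown of cached statements by optimizer mode (a sketch against the v$sqlarea view discussed above) is:

```sql
-- Group cached SQL by optimizer mode; a persistently high NONE count
-- without a recent re-analyze may indicate problems.
SELECT optimizer_mode, COUNT(*) AS stmt_count
  FROM v$sqlarea
 GROUP BY optimizer_mode
 ORDER BY stmt_count DESC;
```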
--- Q: What is data row migration, how can I diagnose it, and how do I solve it?
A: Data row migration occurs when updates to existing rows cause the row size to grow beyond the block size, forcing a "migration" of the entire row to a new extent. This causes excessive disk I/O doing selects from the table going forward.
Diagnosis:
- if you're doing lots of updates, and see increased performance degradation, you may be causing migration.
- Analyze your table fully, look at the CHAIN_CNT field in dba_tables. Note: both migrated rows and chained rows appear in the CHAIN_CNT field.
- SELECT * FROM v$sysstat where name='table fetch continued row'; if you have rows, you're either migrating or chained.
- If you have chain_cnt records, CTAS the data into a new table and re-analyze. If the CHAIN_CNT goes to zero, you have (had?) migrated rows.
Solution:
- Immediate: CTAS data into new table, drop old, rename new to old.
- (or a variation on the above if you're worried about RI: create new table as select, delete from old, insert into old select * from new)
- Longer term: increase the pctfree. If you're doing updates, it's bound to migrate if you have a low pctfree (default of 10%).
--- Q: What is row chaining, how can I diagnose it, and how do I solve it?
A: Row chaining occurs when rows become so large they do not fit into a single block and thus must be "chained" to another block. Similarly to migrated rows, this causes extra disk i/o overhead when querying or updating tables.
Diagnosis:
- Analyze your table fully, look at the CHAIN_CNT field in dba_tables. Note: both migrated rows and chained rows appear in the CHAIN_CNT field.
- analyze table tablename list chained rows into chained_rows; this will list any and all chained rows into a table called "chained_rows."
(same issue as above; migrated rows get counted the same)
- SELECT SUM(data_length) FROM dba_tab_cols WHERE owner='STG' AND table_name='STS_RCRD_IDSA'
will show you if your absolute max possible data length can blow out your block size (remember: there's about 80 bytes of overhead per block).
- select count(col) from suspected_table and run this query at the same time:
SELECT * FROM v$sysstat WHERE NAME = 'table fetch continued row'
If you get an increasing value in that statistic, you have chained (or migrated) rows.
Solution:
- increase block size: create a tablespace with a bigger block size.
--- Q: How do I determine if I have lock contention? Do I have enough locks configured for my system?
A: ??
--- Q: Is it an Oracle myth that the smallest table drives a hash join?
A: Yes apparently. ?? proof or details.
--- Q: What can I do to improve performance on a very large hash join? (Ex: one that occurs when joining two large tables where most rows are selected from both tables)
A: (from Oracle-L discussion 1/21/05)
- increase hash_area_size; this will put most of the joining mechanism in memory and not in temp space on disk (obsolete in 9i if using pga_aggregate_target). This can be set dynamically.
- parallel query; pull more of the data concurrently and ensure that the right number of parallel slaves are running
- put the tables in a larger block size tablespace, with larger block size memory pools.
- consider partition-wise joins
- Increase db_file_multiblock_read_count to the max value the OS allows, to do tablescans as efficiently as possible
- experiment w/ the hidden _db_file_direct_io_count parameter
--- Q: What is a "scattered read?" What is the difference between scattered and sequential reads in performance stats? What are the implications?
A: Simply,
- sequential reads are where the optimizer reads single blocks
- scattered reads occur when you do multi-block i/o
What causes excessive wait times for these two fields in statspack?
- Sequential: usually caused by index accesses, since we're looking for single blocks at a time.
- Scattered: usually caused by many table scans, since you're reading multiple blocks at a time.
--- Q: Is the "cost" value in Explain Plan outputs meaningless when comparing one plan to another? Is it an Oracle Myth that the cost value is useless?
A: "Yes," cost is useless according to Tom Kyte. The "cost" value is internal to Oracle and cannot be used to compare one query to another. HOWEVER, you *can* compare the costs reported by the exact same query when the plans are different. Cost is a relative value, only meaningful when you're analyzing the same query.
Some Oracle experts claim the cost directly corresponds to the run-time of the query. However, there are documented reports of higher-costing queries running more quickly than lower-costing ones (times where a FTS is performed to more quickly return a query, when using the index would show a lower cost but slower response time).
Better to use set autotrace on (instead of pure explain plan output) and look at these three values:
- "Consistent Gets" field (which are in reality Logical I/Os).
- "Card" fields: Cardinality of rows, the fewer the better, especially in OLTP
- "Bytes" field: total bytes read, more important on table scans and in DW
--- Q: What is the "cost" value a measure of?
A: Centiseconds of response time (or "soft clock ticks.") However, it is well known that the cost values between two different queries cannot be compared directly. The cost is only good for analyzing the SAME query against itself after you've added indexes or hints.
--- Q: How do you "archive" statistics on tables? Why would you want to do this?
A: It can be useful to "save" your previous statistics on tables if, after an analyze and regathering, your performance goes south. It can be a quick fix to restore the "old" stats until you can identify the problem.
See Note:117203.1 -- How to Use DBMS_STATS to Move Statistics to a Different Database
--- Q: What is a "right-hand" index problem?
A: From asktom's site: a rare condition occurring on clustered high-end systems where a monotonically increasing value is indexed, so all the additions go to the "right side" of the index and cause skewing. Rare b/c it only occurs where lots of blocks are shared (clustering). Most cases like this see no performance issues.
--- Q: What are the performance implications/space considerations one should take into account when using number(4) versus number (which defaults to number(38))? Does Oracle handle number(4) any differently than number(38)? Why even bother having a scale parameter on the number field?
A: ??? things to consider
- ODBC might have issues dealing w/ "number" without a scale.
--- Q: What is the % performance impact on operations of using MVs w/ MV logs, fast refresh and refresh on commit?
A: ???
--- Q: What is the performance impact of Materialized View "refresh on commit" and the associated MV logs versus just using triggers?
A: ???
--- Q: I'm getting the following message in my error log: "WARNING: Oracle instance running on a system with low open file descriptor limit. Tune your system to increase this limit to avoid severe performance degradation." What does it mean, what's the impact, and how do I fix it?
A: You've got your db_files parameter set too high. Oracle's max is 65534, but to handle this many files you'll have to set rlim_fd_cur and rlim_fd_max in /etc/system (or the kernel of your O/S). In reality, Oracle will have issues with this many files. Tech support reports that this warning is printed when the formula "2 * db_files + 82" is greater than your "current" open file descriptor parameter (rlim_fd_cur in solaris). It's just a warning; there is no actual degradation unless you actually have some huge number of files.
select count(*) from dba_data_files gives you a count of how many data files you're actually using.
--- Q: How do I tell how much shared memory different objects are consuming?
A: select * from V$DB_OBJECT_CACHE
--- Q: How do I tell how much my KEEP pool objects are consuming?
A: ???
--- Q: How do you export statistics from one database/object and import them to another?
A: see also Metalink note 117203.1
Steps:
SQL> exec DBMS_STATS.CREATE_STAT_TABLE ('DW','FEH_STATS');
(note: this creates the table in the default TS of the user ... if you need to, move it to another TS for space purposes. If you move this table you'll have to rebuild its index)
SQL> alter table dw.feh_stats move tablespace fact_s9_tab;
SQL> alter index dw.feh_stats rebuild;
SQL> exec dbms_stats.export_table_stats(ownname=>'DW',tabname=>'f_employee_history',partname=>NULL,stattab=>'FEH_STATS',cascade=>TRUE);
then, export the stat table just created:
$ exp usr/pwd@sid file='dumpfile.dmp' log='dumpfile.log' tables='FEH_STATS' statistics=none
ftp/copy the file to where it needs to go, and import it:
$ imp usr/pwd@newsid file='dumpfile.dmp' log='dumpfile.log' fromuser='user' touser='user'
now, import your new stats for the table. Delete the old ones first if you have any (or, if you're testing, save them off first using the above process):
SQL> exec DBMS_STATS.DELETE_TABLE_STATS ('dw', 'f_employee_history');
SQL> exec dbms_stats.import_table_stats(ownname=>'DW',tabname=>'f_employee_history',partname=>NULL,stattab=>'FEH_STATS',cascade=>TRUE);
Caveats to doing this process:
- The two tables must match each other, including number of partitions, else you'll get an error like "ORA-20000: Unable to set values for table F_EMPLOYEE_HISTORY: does not exist or insufficient privileges."
You can find "extra" partitions in src but not in target like this:
select * from feh_stats where c2 not in (select partition_name from dba_tab_partitions where table_owner='DW' and table_name='F_EMPLOYEE_HISTORY');
- You'll only get the ORA-20000 conflict messages about partitions/columns that actually have statistics gathered on them in the source system. In other words, if you have column A in src but not in tgt, you'll still be able to export/import stats in full as long as there are no stats gathered on that particular column.
--- Q: How do I configure a small but very frequently read table in a high contention environment?
A: (from discussions april 06 on oracle-l). The problem is a hot block I/O issue.
- multiple table copies; a nightmare if data is getting updated at any rate
- use the keep pool
- convert to an IOT (index organized table)
- if the table is small enough, put its contents into a package variable instead of a table (this only works depending on how the table is accessed; if it's accessed via stored procs, then this works great, b/c each proc will load the table and work from its own private copy each time).
- convert the table to be a hash cluster
- partition the table across several I/O devices on the PK
- put into a 2048 byte TS (fewer rows per block)
- alter pctfree to be much higher (90-95%) so there are fewer rows per block
- use the records_per_block feature to limit the number of rows per block, so that every insert/update only locks one block. (alter table test minimize records_per_block).
--- Q: What do lock_table_stats/unlock_table_stats do?
A: as they sound: the lock_* procedures either freeze the current statistics or (if they have never been collected) purposely keep those stats uncollected. You may want to do this on tables where the stats are known to be good and you don't want them overwritten by auto-gathering jobs.
--- Q: What do the stars ("*") or asterisks mean in the query plan output?
A: example:
|  0 | SELECT STATEMENT
|  1 |  HASH GROUP BY
|* 2 |   HASH JOIN
|* 3 |    MAT_VIEW ACCESS FULL
|* 4 |    HASH JOIN
|  5 |     MERGE JOIN
|* 6 |      MAT_VIEW ACCESS BY INDEX ROWID
Answer: lines marked with asterisks have predicate information further below the list of steps, containing access and filter methods (joins and where clauses).
--- Q: What is the argument *against* creating covering indexes?
A: ??? mentioned at a mysql conference as being debatable in oracle. Must investigate the presenter's reasons for bringing it up.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Initialization Parameters
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
--- Q: Where is the init.ora file?
A: Depends:
8i:
- $ORACLE_HOME/dbs (if not following OFA recommendations)
- $ORACLE_BASE/admin//pfile/init.ora (if following OFA recommendations)
However, in 9i, this seems to contain the orig. install values, not the actual live file. If using method 2, $ORACLE_HOME/dbs needs to contain a link to the real file, named init[SID].ora. However, it could be anywhere. You can have it be a link to another location, or have this line at the top: "IFILE=/some/other/directory/init.ora"
Why in the "pfile" directory? "Parameter file" for init.ora, since it contains serverwide parameters.
Update: 9i: now seems to be $ORACLE_HOME/dbs/spfile[SID].ora
In NT/XP: by default the init and sp files are in $ORACLE_HOME/database. ORACLE_HOME typically gets set to c:\oracle\ora92.
--- Q: What are the default values for the init.ora file?
A: Many are dynamically created based on values inputted during installation. In 8i and above, you specified the "size" of your machine and the values were created for you.
(values in parentheses are recommended values for small, medium, large)
db_files                      1024 (80, 400, 1500)
db_block_size                 2048 on NT/XP, 8192 on Unix
db_cache_size                 9i:
db_file_multiblock_read_count 8 (8, 16, 32)
db_block_buffers              32768 (100, 550, 3200)
java_pool_size                10mb (per doc), 20M 8i, 32M 9i
job_queue_processes           0 (per doc), 10 8i?
large_pool_size               0 8i, 8M 9i
log_checkpoint_interval       10000 os blocks (5M on most unix systems), 0 (9i)
log_checkpoint_timeout        900 seconds (8i), 1800 (8i w/ enterprise edition)
log_buffer                    32768 (32768, 32768, 163840)
mts_servers                   1
open_cursors                  64
optimizer_index_caching       0
optimizer_index_cost_adj      100
processes                     50 8i, 150 9i (50, 100, 200)
sessions                      1.1*processes + 5
transactions                  1.1*sessions
shared_pool_size              user-specified (3.5M, 5M, 9M); typically 8M on 32-bit systems, 64mb on 64-bit systems
shared_pool_reserved_size     10% of shared pool
sort_area_size                65536
sort_area_retained_size       65536
--- Q: What is a block size strategy for different Oracle objects? How do I size db_block_size on my server?
A:
- Indexes: large (32k) b/c they get read sequentially and in large chunks. Caveat: "hot" indexes in 32k blocks with lots of inserts/updates will more likely have block contention (since larger blocks hold more index entries).
- oltp tables or small tables: 2k-4k: to better utilize ram
- dss tables: 32k
- default: 8k
Be careful; inserts into 32k block size tablespaces are *very* expensive.
HOWEVER: these sizing considerations (oltp=small, dss=big) might be a myth. Since oracle dbf files often reside on unix file systems, the BEST thing to do is to match your block size to the file system block size. If you're using direct i/o (as with NT or XP) or raw devices (on unix), there is no file system layer to worry about, so block size can be anything (but is suggested to be larger).
Some DBAs argue that there is zero benefit to using different block sizes in your database.
Others make good arguments that having a 32k block size for your index tablespaces almost serves as a private "recycle pool" for index operations.
--- Q: How do you tune the shared_pool? Why is it important? What is it?
A: (thanks in part to a "Tanel Poder" 9/28/03 post) shared_pool is part of the sga and is comprised of: library cache, dictionary cache, buffers for parallel execution, and control structures. The library cache includes sql, pl/sql, etc., all available to all users. At initial create, half the pool size specified is "hidden" from active use and put on a freelist of available memory "chunks."
Usage: three types of allocations can be made from the freelist of shared_pool: permanent, freeable and recreatable.
tips:
- select * from v$sgastat for the different parts of shared_pool
- show parameter shared_pool_size for the initialization value
- select sum(bytes)/(1024*1024) from v$sgastat where pool = 'shared pool';
- dbms_shared_pool.keep: can mark objects for keeping in active use
- use cache, nocache hints to forcibly keep/dump blocks in memory (old way; now use KEEP/RECYCLE pools).
- Library cache misses are far more expensive than buffer cache misses. Make sure the library cache hit rate is best.
There is an old myth guideline that says if the library cache hit rate isn't >99%, you need to increase the shared pool size.
--- Q: Is there any performance degradation caused by having an overly large shared pool?
A: apparently too large a shared pool causes wait performance issues.
--- Q: What is a great resource in metalink for shared pool analysis?
A: Note:396940.1
--- Q: How do I properly size sort_area_size for performance? What are good tips that tell me it's time to increase sort_area_size?
A: Essentially, you want to tune sort_area_size so that any and all sql result set sorting occurs in memory, and not on disk (aka, the temp tablespace). Defaults to 64k; should be much larger. 1M is usually a good starting value.
Run statspack or the bstat/estat scripts and look at the number of disk sorts occurring. If you're getting lots of disk sorts, tune this value up. Large SQL "minus" operations will result in huge memory sorts, and if these are done frequently, a larger-than-average sort_area_size should be used.
another way: compare values in v$sysstat; look at the ratio of memory sorts to disk sorts. Oracle advises 0 disk sorts in OLTP environments, which may be an unattainable goal.
A third way: set events 10032, 10033 and study the trace output.
If you configure this too large, the system will begin to swap/page badly.
--- Q: What is the sort_area_retained_size? How big should it be?
A: sort_area_retained_size tells the server how much of the sort_area_size is "protected" in memory, in case part of the sort_area_size is written to disk. It is recommended to configure this to be the same as sort_area_size. It defaults to the sort_area_size on installation.
?? Wait, the Oracle docs specifically say NOT to size it the same as sort_area_size in cases where you have a large number of concurrent users. Each sort running on the server grabs a memory area equivalent to the sort_area_retained_size and holds it.
Versions prior to 8.1.7 had a bug which required these two values to be set to the same value. Fixed in 8.1.7.
Note: in 9i, sort_area_size, hash_area_size and sort_area_retained_size are obsoleted if you use the pga_aggregate_target parameter and dedicated Oracle connections.
--- Q: What is the shared_pool and why is it important?
A: Shared_pool is the library cache (the executable image of recently referenced sql and pl/sql packages) and the dictionary cache (information from the data dictionary). These caches age out objects just like the db_cache, but cache misses here are far more expensive because they result in recompilations. Thus it's important to keep these pools somewhat large.
--- Q: How do I tune shared_pool_size?
A: in 10g, it's done automatically.
- select * from V$LIBRARYCACHE; look for reloads and invalidations
--- Q: What is shared_pool_reserved_size? Why is it important?
A: The shared_pool_reserved_size is a chunk of the shared_pool left unused specifically to easily resolve large memory allocation requests. It should be sized large enough so that any request for memory on the reserved list can be satisfied without flushing any of the shared pool. Documentation recommends 10% of shared_pool_size be reserved. In 10g it defaults to 5% of the shared pool. If you try to set it to more than 50% of the shared pool size it will fail.
--- Q: How do I tune shared_pool_reserved_size?
A: analyze the V$SHARED_POOL_RESERVED table:
- if request_failures is > zero and increasing, it's too small.
- if free_space is > 50% of the pool size, it's too big; tune down.
if you increase the reserved_size, make sure to increase the shared_pool_size too so that you don't have more than 10% allocated.
--- Q: What is the large_pool?
A: Creates a pool of memory in the SGA for "large" operations. If the large pool did not exist, the normal shared pool would be used by these operations and thus would tend to flush out objects you may want to stay in memory.
- Required for MTS environments
- typically used by RMAN for backup and recovery.
If a large pool is enabled, it will also be used for these operations:
- multiple DB writer slaves
- parallel query execution buffers (when PARALLEL_AUTOMATIC_TUNING=TRUE)
--- Q: What are some various init.ora parameters to look at when tuning a server?
A:
- bitmap_merge_area_size: defaults to 1Mb: specifies a chunk of memory used to merge bitmaps retrieved from a range scan. Increase this if you do a lot of bitmap indexing or bitmap retrievals.
- buffer_pool_keep, buffer_pool_recycle: see elsewhere
- create_bitmap_area_size: defaults to 8mb: increase if you create lots of large-cardinality bitmap indexes, and your index creation will be faster.
Decrease it and save the memory if you never use bitmap indexes or only create low-cardinality bitmap indexes.
- db_block_buffers: see "shared pool" question
- db_block_lru_latches: also see "shared pool" question
- db_block_size: see "block size" question
- db_file_multiblock_read_count: see specific question
- dml_locks, transactions: dml_locks is calculated as 4*transactions and controls the number of DML locks available in the system.
- hash_area_size: 2*sort_area_size default; increase to allow for bigger hash joins.
- hash_join_enabled: defaults to true: best join method
- large_pool_size: see "large pool" question
- log_buffer: defaults to the larger of 500k or (128k*number of cpus). Specifies the amount of memory used when buffering redo log entries. Larger amounts of log_buffer reduce I/O to the redo log files.
- log_checkpoint_interval: specifies how often the system should force checkpoints (outside of those that occur b/c of log switches). The value is an expression of OS blocks (512 on solaris), and defaults to 10000 (on solaris). If log_checkpoint_interval*512 > size of the redo log files, then checkpoints will never occur outside of log switches (preferred). Set to 0 to disable.
- log_checkpoint_timeout: specifies in seconds how long the system must rest until another checkpoint can occur. Defaults to 900 on 8i (15 minutes), 1800 (30 minutes) on 8i enterprise edition
- open_cursors: defaults to 64; usually want to increase for high end-user activity.
- optimizer_index_cost_adj, optimizer_index_caching: see specific question
- pga_aggregate_target (9i): target it for system ram - SGA size (less 10% for unix tasks on the server). This setting will override/obsolete any and all settings for sort_area_size, hash_area_size and sort_area_retained_size.
- processes: defaults to 24*max_parallel_servers or 120 usually. Increase if more than 120 OS user processes need to simultaneously connect.
- sessions, transactions: derived exactly from processes
- shared_pool_size, shared_pool_reserved_size: see "shared pool" discussions
- sort_area_size, sort_area_retained_size: see separate discussion
- sql_trace, timed_os_statistics, timed_statistics: turn on to troubleshoot, off otherwise b/c of the overhead involved.
- workarea_size_policy (9i): ??
--- Q: How should I tune the db_file_multiblock_read_count? How can I find the maximum db_file_multiblock_read_count value for my system?
A: this parameter (8 default) specifies the maximum number of blocks read in one I/O operation during a sequential scan. DSS systems should always increase this value (16-64), as should OLTP systems (4-16) that end up doing large scans. Do not set it TOO high, else the optimizer may start favoring table scans over index usage.
One test: set db_file_multiblock_read_count to 128, set event 10046 level 12, then do a select * from large_table. Examine the trace file, looking at the wait statistics. The "p3" parameter will be the maximum number of blocks concurrently read ... tune the value accordingly.
The max is OS specific. The maximum will be less than the system's max I/O size divided by the db_block_size. Attempts to set this higher than the calculated maximum will not crash the server; Oracle will just set it to the max. Therefore, one way to find your maximum is to set this value to some huge value, boot, and see how it gets set.
- Rule of thumb: 64K/block size = your block count
- Solaris: find maxphys in /etc/system and/or sd_max_xfer_size in /kernel/drv/sd.conf. However, even tuning this manually, solaris's maxphys is 1mb (divided by a typical 8k block size = 128 typical max on solaris). Or use this command: # echo 'maxphys /D' | adb -k
- Connor McDonald notes in a c.d.o.s 10/31/02 posting that Oracle has a hard-limit maxphys (kernel parameter SSTIOMAX) of 1M in all releases of 8.0+, which makes the effective max db_file_multiblock_read_count 128 in all cases.
- Steve Adams has a script called "multiblock_read_test.sql" on ixora.com.au that can help find it.
- Note: this parameter is apparently deprecated in 11g.
--- Q: Why are the values of optimizer_index_cost_adj and optimizer_index_caching considered so important while tuning queries?
A: (culled from several c.d.o.s. threads, some on 10/7/03)
Defaults: optimizer_index_cost_adj: 100, optimizer_index_caching: 0
Explanation:
- optimizer_index_cost_adj is a percentage value, telling the optimizer how to "cost" index accesses versus regular table accesses. The default of 100 tells the optimizer that an index access "costs" exactly the same as a table access. This causes the optimizer to sometimes choose table scans over index accesses in situations that call for it. Suggestions: Set it to 5-25 for an OLTP system (since you're doing specific updates and inserts, you want to use indexes), default (100) or more for a DSS system (since, theoretically, a DSS system is doing huge scans of data, and sometimes table scans are good here).
- optimizer_index_caching controls the percentage of index blocks that the optimizer expects to be in the buffer cache for nested loop joins. Setting this higher makes nested loop joins seem less expensive. In OLTP environments, it's more common to have index blocks already in cache. Nested loops are the best way for users to work on small subsets of data (as they do in OLTP). Suggestions: Defaults to 0; most suggest 20-30 for OLTP systems. DSS: low; default of 0 or at most 10.
--- Q: What are the other optimizer parameters to consider manipulating?
A: Per an oracle-L post 10/11/03 by Gaja Krishna Vaidyanatha. Don't mess with these unless you have good reason to:
- optimizer_dynamic_sampling: 9i feature: 4 is sufficient (? default)
- optimizer_features_enable: defaults to the current version of Oracle. Only manipulate if you're trying to emulate older optimizer behavior (as some have done in 9i).
See optimizer_mode
- optimizer_index_caching: see above
- optimizer_index_cost_adj: see above
- optimizer_max_permutations: defaults to 80,000; testing has shown that setting it far lower (2000) has a positive impact on generated plans.
- optimizer_mode: CHOOSE default in 8i, ALL_ROWS default in 9i.
- optimizer_percent_parallel: 100 default on 8i (0 default in 9i?)
- optimizer_search_limit: defaults to 5; specifies the max number of tables to join when doing cartesian products.
--- Q: What is the actual formula the CBO uses to calculate the cost of a query?
A: published by Wolfgang Breitling at IOUG-A 2002:
cost = blevel + ceil(selectivity * leaf_blocks) + ceil(selectivity * clustering_factor)
this value is then compared to the cost of a tablescan for the particular table:
Table scan cost = High Water Mark / (adjusted db_file_multiblock_read_count)
--- Q: How do I set 9i's optimizer to use the 8i behavior for troubleshooting?
A: Set optimizer_features_enable=8.1.7 to return the optimizer to 8i behavior.
--- Q: How do I get a list of the undocumented parameters in Oracle?
Q: How do I see all hidden parameters in Oracle?
A: a convoluted query accessing x$ksppi, x$ksppcv, and x$ksppsv. see oracle_admin.sql for full syntax.
--- Q: What are some undocumented parameters in Oracle whose values changed from 8i to 9i, and may cause us some problems?
A: Note: changing these is unsupported by Oracle tech support.
- _UNNEST_SUBQUERY: false in 8i, true in 9i
- _ORDERED_NESTED_LOOP: false in 8i, true in 9i
- _ALWAYS_SEMI_JOIN: off in 8i, on in 9i
- _B_TREE_BITMAP_PLANS: false in 8i, true in 9i. set this to false if you do NOT want the optimizer to ever consider using a bitmap data access method
- _old_connect_by_enabled: true in 8i, false in 9i
--- Q: How do you set an undocumented parameter?
A: alter session set "_UNNEST_SUBQUERY" = FALSE;
--- Q: What are the different buffer pools? And how do they work?
A:
- default: MRU/LRU chain of memory blocks that gets aged off in a queue fashion
- keep: a reserved area of memory blocks at the "MRU" end of the default chain, designed to keep frequently used blocks in memory longer
- recycle: a reserved area at the "LRU" end of the default chain, designed to be a holding area for blocks we have no desire of keeping.
--- Q: How do you bind a table to a buffer pool? Are there advantages?
A: Two ways:
1. at table create. Example: sql> create table test (col1 int, col2 char(5)) storage (buffer_pool keep) ...;
2. post table create. sql> alter table test storage (buffer_pool keep);
Keep/Recycle/Default pool strategy:
- Tables smaller than 10% of your default pool are candidates for the KEEP pool (and of course, only those which are frequently accessed).
- Frequently accessed and smaller tables are good KEEP candidates
- Tables larger than 200% of your default pool are candidates for the RECYCLE pool
- Large, frequently table-scanned tables are good RECYCLE candidates
Of course, if a table is frequently accessed, one can argue that it will *already* be in memory anyway, so why waste time binding it to the KEEP pool. However, a large table scan can easily blow out any and all blocks in the default pool, hence the reason to bind it to the KEEP pool in the first place.
--- Q: What is a good system buffer pool sizing strategy?
A:
- Keep at 50% of total buffers, Recycle at 10%, default at 40%
- 8i init.ora example:
db_block_buffers=50000
db_block_lru_latches=4
buffer_pool_keep=(buffers:25000, lru_latches:1)
buffer_pool_recycle=(buffers:5000, lru_latches:1)
- 9i example:
db_cache_size=50000
db_keep_cache_size=25000
db_recycle_cache_size=5000
- Remember to increase db_block_lru_latches: each pool needs its own latch to function properly. The MAX this value can be set to is 2*cpus. Oracle defaults the value to 1/2*cpus.
--- Q: When a document says that you need to "Relink the RDBMS," what does that mean?
A: From Metalink note 131321.1; it means literally re-linking all dynamic binaries with new library files. Rough steps:
- log in as user oracle
- confirm environment vars are properly set: ORACLE_HOME, LD_LIBRARY_PATH and possibly other library paths (LD_LIBRARY_PATH_64)
- set umask to 022
- $ORACLE_HOME/bin/relink all
Suggest piping all the output of relink to a file like this: relink all > out.log 2>&1 and then searching through the file for the keyword "Error"
--- Q: How can I tell what my license high water mark is without shutting down/restarting?
A: query v$license or v$resource_limit.
--- Q: How do I see what parameters have been modified in my server? What are all my non-default parameters?
A: two ways:
- the alert log prints out all non-default parameters when booting
- select * from v$parameter where isdefault='FALSE' order by 2;
--- Q: Is it true that in 9i and higher, if you use pga_aggregate_target, all the sort_ variables (sort_area_size, etc.) are ignored?
A: Yes; if pga_aggregate_target is set, then the variable WORKAREA_SIZE_POLICY is set to AUTO and the following variables are ignored:
- bitmap_merge_area_size
- create_bitmap_area_size
- hash_area_size
- sort_area_retained_size
- sort_area_size
If you set pga_aggregate_target=0, then the policy becomes MANUAL and these _area_size variables become the configuration parameters. If you leave pga_aggregate_target as default, then Oracle automatically sizes it at 20% of the SGA.
--- Q: What are good starting values for pga_aggregate_target?
A: per Oracle's P&T Guide
- OLTP: small % of your total memory usage: 20% of available shared memory
- DSS/DW: large % of total memory: 80% of available shared memory
However, in 10g v$pga_target_advice exists and can be used as a tool to set the target more efficiently. Follow the cache_hit percentage until it levels out and resize pga_aggregate_target appropriately.
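The 20%/80% rule of thumb above can be sketched as a quick calculation. This is a toy helper, not anything Oracle-supplied; the percentages come straight from the guideline quoted here, and the numbers it returns are only a first guess to be refined later with v$pga_target_advice:

```python
# Rough starting value for pga_aggregate_target, per the rule of thumb
# above: OLTP ~20% of available memory, DSS/DW ~80%.
def pga_target_mb(available_mb, workload):
    """Return a starting pga_aggregate_target in MB for 'oltp' or 'dss'."""
    pct = {"oltp": 0.20, "dss": 0.80}[workload.lower()]
    return int(available_mb * pct)

# e.g. 4096 MB left over after the SGA and OS overhead:
print(pga_target_mb(4096, "oltp"))   # OLTP starting guess
print(pga_target_mb(4096, "dss"))    # DSS/DW starting guess
```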
--- Q: In 10g, what is the implication of sga_max_size, sga_target and the ASMM Automatic Shared Memory Management features of 10g?
A:
o sga_max_size: internally calculated based on the values of sga_target and pga_aggregate_target. For documentation purposes only
o sga_target: specifies the total value of all SGA components. If set, then the following are automatically sized:
- db_cache_size
- shared_pool_size
- large_pool_size
- java_pool_size
- streams_pool_size
If you manually set one of these 5 values, that serves as a minimum value before the Automatic Shared Memory Management (ASMM) system takes over.
Note: in 10g certain memory parameters print out as modified in the alert log (__shared_pool_size, __java_pool_size) despite not having a manual setting in pfile/spfile. These are (probably) printouts of the automatically sized versions of these parameters.
You'll still have to manually size things like log_buffer, the keep/recycle pools, and the 32k/16k other-sized caches.
--- Q: After setting sga_max_size I get this error in my alert log: "WARNING: oradism not set up correctly" ... how do I fix it?
A: permissions issue on the oradism executable. per Metalink Note:374367.1: (as root)
1- cd $ORACLE_HOME/bin
2- chmod 4550 oradism
3- chmod g+s oradism
4- chown root:dba oradism
5- Bounce the database
--- Q: Can I set sga and pga targets dynamically? sga_max_size, pga_aggregate_target, sga_target?
A:
- sga_max_size is not dynamic; requires a reboot
- pga_aggregate_target is dynamic
--- Q: What are some good Metalink docs related to ASMM?
A: Note:295626.1: How To Use Automatic Shared Memory Management (ASMM) In Oracle10g
Note:257643.1: Oracle Database 10g Automated SGA Memory Tuning
--- Q: Can you get a history of parameter changes in your server?
A: Yes; complicated query against stats$parameter within statspack.
sp_parm_changes.sql from Tim Gorman's site http://www.evdbt.com/tools.htm#sp_parm_changes
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Archive Log/Redo Logs/Logging
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
--- Q: How can I tell if my database is in archive log mode?
A: old way: 9i and below: show parameter archive and look for the "log_archive_start" parameter. If it is true, you're running in archive log mode. If false, then not.
new way: 10g and above: the log_archive_start parameter is deprecated and you can bring a database to archivelog mode just by alter database. The best indicator seems to be the existence of a populated log_archive_dest parameter. Also: SQL> archive log list, or select log_mode from v$database;
Another way: select * from v$archived_log order by completion_time desc;
--- Q: What do the "%" options mean in the log_archive_format variable?
A: (select * from v$log_history to see the system representation of these values)
- %t: thread number: the engine number switching the logfile.
- %T: thread number, zero padded
- %s: log sequence number: this is the actual number of the redo log, indicating the number of times it has switched. This starts from time-zero of the server's creation.
- %S: the same as %s but zero padded
- %d:
- %ts: ignored
- %p:
- %%ORACLE_SID%%: ignored.
The Oracle default seems to be "%t_%s.dbf". I usually set it to: *.log_archive_format="log%s_%t.arc"
--- Q: How can you estimate how much disk space one will need to hold archive logs?
A: Calculate how often your redo logs switch, combined with their sizes, to estimate your daily log space usage. Example:
- log switches occur every 30 minutes
- your redo logs are sized at 32mb each
(24 hours / 30 mins = 48 switches every day)
48 * 32mb = 1.6gb daily, conservatively.
--- Q: What is the process of logging that occurs in Oracle?
A:
- while working, or before a commit, changes are stored in the rollback segments.
Once dml is enacted, (almost) every change is recorded in the online redo logs (select * from v$logfile to see locations). Thus you want to separate rollback from redo. LGWR is the process that uses these files.
- Once enough changes have been written to one particular redo log, a "log switch" occurs, and the "next" log file begins being used. There are always at least two redo log files ... hopefully more.
- When this switch occurs, Oracle confirms that all blocks in the database related to that particular redo log are checkpointed to disk. This confirms that the redo log is no longer necessary and thus can be overwritten (as it will eventually be). This eliminates the need for that redo log for crash recovery too.
- If you have archivelog mode turned on, the filled redo log is then copied to the archive destination (show parameter log_archive_dest to see the location). These archive log files will continue to accumulate and must be periodically cleaned.
- the online redo logs are overwritten automatically; there's never a need to "dump transaction log" as with Sybase. However, online logs cannot be overwritten until they've been purged to archive logs.
--- Q: What order are Oracle objects written to when you commit? What is the order of objects being written?
A: ??? not confirmed, believed to be:
Summary:
- temp and undo/rollback during transactions
- redo
- control file (if necessary)
- blocks in memory
- checkpointed to table and indexes periodically
- archive when redo filled; logswitch triggers checkpoint
--- Q: How do you set up archive logging on a server? How do you START archive logging? How do you turn on archive logging?
A:
- create a target directory to hold your archive logs.
- (9i) create pfile from spfile;
- Add these lines to your init.ora parameter file:
*.log_archive_dest='/raid/oradata/archive_logs/charlie'
*.log_archive_start='TRUE'
*.log_archive_format="log%s_%t.arc"
- (as sys) shutdown the database
- (9i) create spfile from pfile (since you've modified the pfile and need the spfile to be updated to be able to boot).
- (9i): startup mount exclusive
- alter database archivelog;
- shutdown and startup again OR alter database open;
to test whether it's working:
- alter system switch logfile;
this forces a logfile switch and should write a log file to the directory specified.
10g: log_archive_start is deprecated; do not set it. Use these steps:
$ sqlplus / as sysdba
SQL> create pfile from spfile;
SQL> shutdown immediate
now, vi the pfile and add in these lines (example from Oraprod):
*.log_archive_dest_1='LOCATION=/u06/oradata/ORAPROD/arch'#11/6/07: changed to /u06
*.log_archive_format='%t_%s_%r.arc'#11/6/07: changed filenames for consistency
$ sqlplus / as sysdba
SQL> create spfile from pfile;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
Archive logging should be configured and started. Test by executing this:
SQL> alter system switch logfile;
and ensuring that an archive log file was created in the specified location.
Feb/10: since "log_archive_start" is deprecated, the minimal sequence is just:
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
---
Then, recovery will look like this:
$ sqlplus /nolog
SQL> recover
ORA-00279: change 4292870 generated at 04/20/2004 15:25:34 needed for thread 1
ORA-00289: suggestion : /raid/oradata/archive_logs/charlie1/charlie1log1_25.arc
ORA-00280: change 4292870 for thread 1 is in sequence #25
Specify log: {=suggested | filename | AUTO | CANCEL}
/raid/oradata/tmp/charlie1log1_25.arc
Log applied.
Media recovery complete.
SQL> shutdown
ORA-01109: database not open
SQL> startup
...
--- Q: How do you STOP archive logging?
A: you can't just turn off the log_archive_start parameter in the pfile...
(8i/9i directions):
set log_archive_start = false in your initSID.ora file
shutdown server
startup mount exclusive
alter database noarchivelog;
alter database open;
(10g) ??
--- Q: What are non-logged database operations in Oracle?
A: similar ones to Sybase:
- insert into table select ... from ...
- create table as select ... unrecoverable
- truncate table ??
- Any "nologging" alter on an object
- you can always set "noarchivelog mode" during a load to "save" the time doing redo logging, but the database will need to be backed up afterwards to be considered valid with your archive logs going forward.
- You can set "_disable_logging = TRUE" for certain loads, and you'll probably see a 15-25% performance increase.
- to prevent any nologging operations from corrupting your Data Guard standby, enable force logging at the database level: alter database force logging;
--- Q: What is the impact of creating an index with or without logging? ex: create index x_name on table (columns) nologging?
A: Simple: nologging does not actually log the index creation. There's really no reason to ever create an index w/ logging.
--- Q: Does "nologging" really mean nothing is logged?
A: Well, no. It's "less-logging." Even when creating a tablespace nologging, certain operations still generate redo log activity. There are only three truly no-logging operations:
- CTAS: create table as select ... unrecoverable
- insert /*+ append */ into table nologging
- sqlloader direct-load insert.
What about create index ... unrecoverable? Answer: deprecated command, replaced by nologging in versions 9i and above.
--- Q: You cannot combine nologging and unrecoverable with a CTAS statement. Why, and which is better to use?
A: Ah, I see why: the manuals for 9i say that recoverable/unrecoverable are deprecated words, and that nologging is the replacement. Thus, nologging is the correct option.
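The rules above (direct-path operations on NOLOGGING objects skip most redo, but database-level FORCE LOGGING overrides that) can be summarized as a toy decision helper. This is purely illustrative, not an Oracle API; the operation names are made up for the sketch:

```python
# Toy decision helper summarizing the nologging rules above: a direct-path
# operation against a NOLOGGING object skips (most) redo, unless FORCE
# LOGGING is set at the database level (e.g. to protect a Data Guard
# standby). Illustrative only -- the operation labels are invented here.
DIRECT_PATH_OPS = {"ctas", "insert_append", "sqlldr_direct"}

def generates_full_redo(operation, nologging, force_logging):
    """True if the operation will be fully logged to the redo stream."""
    if force_logging:
        return True    # FORCE LOGGING overrides any NOLOGGING attribute
    if operation in DIRECT_PATH_OPS and nologging:
        return False   # one of the three truly "no-logging" paths
    return True        # conventional DML is always logged

print(generates_full_redo("insert_append", True, False))  # direct path, no redo
print(generates_full_redo("insert_append", True, True))   # force logging wins
```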
--- Q: What are the statuses of the redo logs (as seen in v$log)?
A:
- Current: current redo log being used
- Inactive: available, waiting for log switch
- Stale: incomplete log; sometimes caused by shutdown. Not normally a problem if the database comes up properly the next time
- Unused: never been written to
- Invalid: too many errors writing to it; must be dropped and recreated.
--- Q: How big should I make my redo logs?
A: "It depends."
- On the one hand, you want BIG redo logs so that you're not switching all day long. A switch is a huge I/O hog (it causes a checkpoint in the database, forcing every dirty buffer to be written). Some DBAs don't think you should switch during the day at all (instead relying on a cron job to issue a manual "alter system switch logfile" in the middle of the night).
- However, one large redo file will make for a ridiculously long recovery period, if it ever occurs, as the SMON process will have to roll forward or back an entire day's worth of transactions.
--- Q: How often should my redo logs switch?
A: Rule of thumb: redo logs should switch no more than every 30 minutes. An hourly switch is better for DSS systems. Oracle recommends no more than every 20 minutes. SO, size your log files accordingly.
--- Q: Is it good practice to force my redo logs to switch every X minutes?
A: it depends on whether you have a good business reason. Good reasons include:
- wanting to force switches to keep a Data Guard standby up-to-date
- wanting to force the writing of archive logs for backup purposes b/c you're running on NFS
Good discussion on the topic:
https://asktom.oracle.com/pls/apex/f?p=100:11:::NO:RP:P11_QUESTION_ID:810179042034
The parameter archive_lag_target will force Oracle to switch logfiles at a specified interval.
--- Q: How do I resize my redo logs?
A: create new ones, drop the old ones
sql> alter database add logfile group X '/path/filename' size BIGM
sql> alter database drop logfile group X
How do you do this, if you don't have the space on your filesystem?? You can just summarily drop logfiles if you've got multiples. You'll have to alter system switch logfile until you get the logfile you'd like to drop to be completely clear of the recovery thread.
--- Q: How do I add new redo-log files? How do I move redo logs?
A: same process as resizing. To get information:
sql> select * from v$logfile;
alter database add logfile group 4 '/raid/oradata/CHARLIE/redo04.log' size 500M;
alter database drop logfile 'old log filename'
(or, if the existing logfile is online) alter system switch logfile;
Notes: You'll probably have to switch the system logfiles several times during this process. AND, the number of log groups is capped by the control file's MAXLOGFILES setting, so you may have to drop existing groups before adding more.
--- Q: How do I just drop a member of a logfile group?
A: alter database drop logfile member '/export/home/oracle/oradata/EHRISTG/redo02b.rdo';
get candidates from select * from v$logfile;
--- Q: Is it a myth that one must always backup online redo logs?
A: Yes, it's a myth: protect online redo logs by multiplexing them, not by backing them up (restoring a stale copy of an online log over the current one can destroy committed transactions). They are still critical to recovering after a database crash, as they are used to "roll forward" any changes that haven't been checkpointed to disk -- hence the multiplexing.
--- Q: What is logminer?
A: an Oracle-supplied utility (the DBMS_LOGMNR packages, shipped with the database since 8i) that can give useful information about Oracle logs. Example: used to determine the exact date/timestamp for point-in-time recoveries.
--- Q: I have a job that's so large it's filling up the Redo log space. What can I do?
A:
- Increase log space (cop-out oracle tech support answer)
- Attempt to commit mid-stream (though, why does more frequent committing sometimes result in MORE logging??)
- alter table nologging ???
Is this supposed to be for redo or for undo?
--- Q: How can I measure the amount of redo space being generated for a particular insert?
A:
- set autotrace on in sqlplus before doing the insert
- select * from v$mystat
Redo is generated for the actual insert, the indexes, and the undo tablespace, hence an insert to a table w/ avg row length of 17 can generate 450 bytes of redo.
--- Q: How can I dynamically change my archive log destination?
A: configure a new destination and defer the previous. Run this sequence; it takes effect immediately. Test with alter system switch logfile;
9i and below:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2="LOCATION=/u01/oradata/ORAIMPL/archive_logs" ;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=DEFER;
ALTER SYSTEM ARCHIVE LOG STOP ;
ALTER SYSTEM ARCHIVE LOG START ;
ALTER SYSTEM SWITCH LOGFILE ;
SELECT * FROM V$ARCHIVE_DEST ;
and confirm at the o/s level that a new archive log appears in the new location
10g: archive log stop/start deprecated, use ALTER DATABASE ARCHIVELOG
--- Q: What does the _disable_logging parameter do?
A: as it sounds, it disables logging in a server. However, its inclusion is only for benchmarking purposes and should *NEVER* be done on a real system.
--- Q: What is the difference between issuing: SQL> alter system switch logfile; and SQL> alter system archive log current;
A: Both force a log switch, but "archive log current" also waits for the current redo log to finish archiving before returning, while "switch logfile" returns immediately and lets ARCH archive in the background. For backup scripts, "archive log current" is the safer choice.
--- Q: What is the performance impact of using Archive Logging?
A: 0% if you've properly configured your database objects. Archive logging is all about copying files from one disk to another. If you ensure that the LGWR process is never accessing the same I/O devices as the ARCH process, then you'll be fine. When a redo log fills up and switches, the ARCH process simply needs to copy that redo log file to the archive log location.
asktom example: You need sufficient devices to avoid contention here. You want to make it so that when LGWR is writing to a device, ARCH is *not* reading that device. So, you would have log group 1 on dev1 (mirrored to dev3), log group 2 on dev2 (mirrored to dev4), log group 3 on dev1/dev3, log group 4 on dev2/dev4 and so on. Well, LGWR writes to dev1/dev3. Arch is reading dev2/dev4 and writing to dev5. Arch finishes and waits. LGWR now writes to dev2/dev4, Arch reads dev1/dev3 and writes to dev5. No contention, there you go -- smooth operation, no degradation.
--- Q: What does alert error message "cannot allocate new log, archival required" indicate?
A: Your arch process is running behind. Allocate more arch processes (the log_archive_max_processes parameter).
--- Q: what does "Private strand flush not complete" indicate in the alert log?
A: See Note: 372557.1 These are normal messages written on busy systems during log switches; in essence the database hasn't finished writing redo information to the log when a switch is initiated. Per the above metalink note: The only reason for concern with these messages is if there is a significant gap between the "cannot allocate new log" message and the "advanced to log sequence" message.
--- Q: I'm multiplexing my redo logs but it's causing me contention because I have only a limited number of I/O devices. What are the trade-offs?
A: Since redo logs are critical to recovery operations/unexpected shutdowns, oracle recommends they be multiplexed. The trade-off is I/O contention; lay out the log groups across devices as in the asktom example above to minimize it.
--- Q: I want to rename a redo logfile physically. How do I do that?
A: Adapted from a Metalink document, also documented in the Administrator's Guide 10g (chapter 6).
SQL> shutdown immediate;
Move/Rename the files at the O/S level.
$ cd /u01/oradata/SPENDDEV
$ mv redo3a.log redo03a.log
$ cd /u04/oradata/SPENDDEV/
$ mv redo3b.log redo03b.log
Startup the database, mount, but do not open it.
SQL> STARTUP MOUNT
alter database rename file '/u01/oradata/SPENDDEV/redo3a.log','/u04/oradata/SPENDDEV/redo3b.log' to '/u01/oradata/SPENDDEV/redo03a.log','/u04/oradata/SPENDDEV/redo03b.log';
If you have any syntax errors or wrong directories, it'll fail. Otherwise Database Altered. Once fixed:
SQL> ALTER DATABASE OPEN;
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Backup/Recovery
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
--- Q: What is a "hot" backup versus "cold" backup?
A:
- "cold" means the system is unused at the time: tablespaces are "offline" or (preferably) the Oracle database is shutdown.
- "Hot" is the reverse: tables are active. The tablespace should be in "backup" mode, which allows access by users but freezes the datafile header.
alter tablespace begin backup; /* starts backup mode */
... copy the .dbf files underneath
alter tablespace end backup; /* puts back in normal mode */
--- Q: How can you tell if a tablespace is already in backup mode?
A: select * from v$backup; If status='ACTIVE' you're in hot backup mode. If status='NOT ACTIVE' it's in normal mode. Join to dba_data_files to get actual file names.
SELECT d.file_name, d.tablespace_name, decode(b.status,'ACTIVE','begin backup on', 'NOT ACTIVE','end backup') FROM v$backup b, DBA_DATA_FILES d WHERE b.FILE# = d.file_id ORDER BY tablespace_name, file_name
--- Q: Is there any "danger" in leaving a somewhat idle server in hot backup mode for an extended period of time?
A: Some: while a tablespace is in backup mode, the first change to each block is written to redo as a whole-block image, so redo/archive volume grows; and if the instance crashes while in backup mode, the affected datafiles will need "alter database datafile ... end backup" (or media recovery) before the database will open. Don't leave it on longer than the backup needs.
--- Q: Is there a difference between working on a tablespace file when it's "offline" versus when the server is completely shut off?
A: No difference; offline'd tablespaces are inaccessible and are treated as if the database were shutdown. Underlying dbf files are capable of being moved or gzipped (though if you move them, the tablespace won't come back online unless you rename the datafile within the server).
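The begin/end backup sequence above can be sketched end-to-end for a single tablespace. A minimal sketch only -- the tablespace name (USERS) and file paths are illustrative, not from this FAQ:

```sql
-- hot backup of one tablespace; users keep working throughout
alter tablespace users begin backup;

-- copy the underlying datafile(s) at the OS level
host cp /u01/oradata/PROD/users01.dbf /backup/hot/users01.dbf

alter tablespace users end backup;

-- sanity check: nothing should be left in backup mode
select file#, status from v$backup where status = 'ACTIVE';
```

In a real script you would drive the datafile list from dba_data_files (as in the join shown above) rather than hard-coding paths.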
--- Q: What happens when a tablespace is put in "backup" mode? (alter tablespace begin backup)
A:
- Oracle freezes the SCN (System Change Number) in the header of the datafiles underlying the tablespace.
- Users can still access objects in the tablespace, and make changes.
- Once the backup has ended, the redo logs bring the tablespace up to date.
- If using archive log mode, and the redo log fills up, then both the archive log and the redo log will be used seamlessly to "catch up" the datafile.
It's an Oracle myth that all changes to the data files are "cached" somewhere; the data files continue to get updated. So, how do you recover? If you crashed while the tablespace was in hot backup mode, you could "restore" the TS from the backup file and then the redo logs would replay all transactions to return the TS to current mode during the "recover" of the TS.
--- Q: What is a good backup strategy?
A:
- nightly "hot backups." Use a script to put each tablespace in "backup" mode and then copy the underlying data files.
- 3 separate rman steps to backup database, archivelogs and then controlfile.
- semi-weekly cold backups: quick sql script to log in, put a tablespace in offline mode, then do an os-level copy, then a sql script to return the tablespace to online mode.
- weekly full exports (exp)
- keep two "rounds" of archived log files. (normally archive logs become moot once a backup has been done).
- backups of archived logs. Use multiple locations to ensure that corruption in the archive log does not compromise recovery.
- online backups of controlfiles (both binary and ascii)
binary: alter database backup controlfile to '/my/directory/control.bak';
ascii: alter database backup controlfile to trace (this causes a control file script to be written to the user_dump_dest directory in the format ora_nnnnnn.trc)
Notes:
- if running in archivelog mode, never really a need for "cold" backups. Just do hot backups and do point-in-time recovery.
- Do NOT depend on exp for recovery; imp is notoriously finicky
- Archive logs and backups have the same relationship as Sybase's transaction logs and database dumps. Once you have a backup, the previous archive logs are unnecessary.
- Once a backup strategy is in place, it's a great idea to do a test recovery. Delete a table and try to restore it. Delete some data and try to do point in time recoveries (if using archive log mode).
--- Q: Can you do a table-level recovery in Oracle?
A: Yes: using an exp full dump file.
exp user/pwd@sid tables='table1,table2,...' file='dumpfile.dmp'
imp ehri20curr/ehri20curr@ehrius file='cpdfhl01_021004.dmp' tables='boss' fromuser='cpdfhl01' touser='ehri20curr'
--- Q: Can you do a table-level recovery in Oracle using RMAN?
A: NO, not even in 10g. The best you can do is a point-in-time recovery on the entirety of the tablespace the table resides in, but NO object-level recovery. Flashback Table (10g and beyond) can bring a dropped table back from the recycle bin, and 12c finally adds a true RMAN "recover table" command for object-level recovery.
--- Q: What is a good Disaster Recovery plan for an Oracle database?
A:
- COLD backups with archived redo logs.
- copies of controlfiles, both binary and ascii
- ddl for the tablespaces
- copies of full database schema: all ddl for tables, indexes, keys, and constraints
- copies of all pl/sql in database: triggers, procedures, functions
- copies of all .ora files customized: init.ora, tnslistener, listener, etc
- exported data strictly as a backup, not to depend on!
And of course, having all these located OFFSITE or at least off the target machine. (Note: if the entire machine dies, having system backups, disk formats, and other system-level disaster recovery issues becomes your first priority).
--- Q: How do you tell how often log switches (between different redo logs) occur? (Important b/c frequent log switches means the logs are too small)
A:
One good way: query v$log_history and group the switches by hour:
select to_char(first_time,'YYYY-MM-DD HH24') hour, count(*) switches from v$log_history group by to_char(first_time,'YYYY-MM-DD HH24') order by 1;
You can also monitor the alert file (which records every switch) or check the date/time stamps of the archived logs.
- You can force a switch with the "alter system switch logfile" command (though you really don't have much of a reason to do this).
--- Q: What are typical disaster recovery steps to restore a database?
A: (from oracle-L post 10/24/03 "Mercadante, Thomas F")
- restore oracle software from tape ($ORACLE_HOME)
- restore config files (init.ora, listener.ora, tnsnames.ora, etc).
- startup instance with nomount.
- run Rman to restore the controlfile from tape (see below if doing by script)
- Alter database mount
- run Rman to restore database files
- alter database open resetlogs.
- perform a brand-new Rman backup (database, logs & controlfile) to save what you've done.
--- Q: What is the process for creating a new server from backups of an old?
A: Started from an oracle-l posting 10/23/03 by "Kuipers, Rene", then added in lazydba.com discussion
- install oracle software (same version as source machine) on target
- obtain hot backups from source server (alter tablespace begin backup)
- copy data files (.dbf) to target server
- end backup on source server (alter tablespace end backup)
- on Target, backup the controlfile to trace (alter database backup controlfile to trace) and then edit the file by hand to change db-name and locations to be correct for the new target machine.
- start target instance (startup nomount)
- run the edited tracefile on target to create the controlfile
- in svrmgrl> mount the db, recover using backup controlfile
- open the target db RESETLOGS (??)
alter database backup controlfile to trace;
You'll get a tracefile which contains the CREATE CONTROLFILE command. Copy the tracefile over, edit it when necessary, startup nomount of the target database, run the tracefile and presto.
--- Q: What do I have to do to a script-based controlfile to prepare it for loading?
A:
- delete lines from the top until "startup nomount"
- add in a connect string so the script can be run from the command line
Now the script can be called directly from sqlplus: sql> @ora_nnnnnnn.trc
--- Q: Why are control files so important?
A: restoring an old control file from tape will invalidate the database (it forces a recovery "using backup controlfile" and a resetlogs open). Instead, change the init.ora parameter to point at the correct location for the control file and use the existing one. Then, if the control file is current, RMAN knows where all the backups are and thus can handle the extraction.
--- Q: How do I find out where my control files are?
A: select * from v$controlfile;
--- Q: How do I do an object level recovery with no backup and no archive logs?
A: Probably not possible:
- If on 9i, and if you're using automatic undo management, you can use a "flashback query." However, the user must have started dbms_flashback on the session in question,
- you could use logminer, but only if a logswitch hasn't occurred.
--- Q: While doing "recover database using backup controlfile", I get log corruption errors (ORA-00353). What can I do?
A: try _allow_resetlogs_corruption=TRUE (a last-resort underscore parameter)
--- Q: How do you backup just one partition?
A: exp with the partition syntax: exp user/pwd tables=table_name:partition_name ...
--- Q: Is it a myth that Oracle recommends at least a monthly cold backup?
A: Yes of course it is: the whole purpose of archive logging was to eliminate the need for outages.
--- Q: What are the steps involved in a single tablespace/datafile recovery, without RMAN? What are symptoms and steps?
A: 1. Note that you have a corrupted/missing datafile. When you startup you'll get a message like this:
idle> startup
ORACLE instance started.
Total System Global Area 320309728 bytes
Fixed Size 731616 bytes
Variable Size 285212672 bytes
Database Buffers 33554432 bytes
Redo Buffers 811008 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 10 - see DBWR trace file
ORA-01110: data file 10: '/u01/oradata/dw2/boss_test_01.dbf'
and the alert log will have:
ORA-01157: cannot identify/lock data file 62 - see DBWR trace file
ORA-01110: data file 62: '/u05/oradata/dw2/boss_test_03.dbf'
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory Additional information: 3
2. shutdown the server (otherwise QMN0 will attempt to restart every 5 minutes)
3. restore the files from backup
4. Startup the server
--- Q: What are the steps involved in physically moving a database from one server to another?
A:
- alter database backup controlfile to trace (for safe keeping only)
- shutdown oracle on ServerA and tar everything under /oracle to a tape/tar file
- on ServerB, untar the tape to /oracle
- startup mount
- alter database open resetlogs. (or recover as necessary)
--- Q: I've got a corrupted data file that I've offlined and can't get Oracle to read. Is there any way to salvage the data?
A: No easy way.
- Oracle has a tool called "BBED" (Block Browser/Editor) that normally is only used by their tech support guys. You can engage Oracle consultants to come onsite and be prepared to spend thousands of dollars to have the tool unload data to sql*loader files. Or you can do some investigation yourself.
- DUL: Data UnLoader, a tool that worked very well reading 8i data files, but has never been upgraded to 9i/10g?
- DUDE: "Database Unloading by Data Extraction" www.ora600.org, a tool that reads from .dbf files outside of the database, a la jDUL.
--- Q: I've lost my undo tablespace (and/or redo logs). How do I recover?
A: Follow these steps: Get a copy of your control file and go to case #2 (the reset logs case): log in, use the resetlogs control file script to startup nomount. You'll get an error like "ORA-01565: error in identifying file '/u04/oradata/dw30tst/undotbs2.dbf'" when trying to startup mount.
SQL> alter system set undo_management = manual scope=spfile;
SQL> shutdown immediate;
SQL> startup mount
SQL> alter database datafile '/u04/oradata/dw30tst/undotbs2.dbf' offline drop;
SQL> alter database open resetlogs;
SQL> drop tablespace undotbs2 including contents and datafiles;
SQL> alter system set undo_management = auto scope=spfile;
--- Q: What are some VLDB backup strategies & best practices?
A: Some sources:
http://www.oracle.com/technetwork/database/features/availability/vldb-br-128948.pdf
http://www.oracle.com/technetwork/database/features/availability/311394-132335.pdf
https://docs.oracle.com/cd/E18283_01/server.112/e16541/vldb_backup.htm
Even with a big powerful machine, you are limited in the amount of data you can back up at a time. (An Exadata quarter rack can apparently do about 4TB/hour). Some options:
1. Take the time it needs for a Level 0, then depend just on incrementals going forward
2. Incrementally update the backup on disk
2a. Use "Fast Recovery Area" which apparently eliminates the need to do full backups?
3. Use Data Guard to keep a real-time mirror backup
4. Maintain your ETL files and depend on re-running them if needed.
5. Multi-cycle full backups: just run level 0s over several days using rman partial
6. Just do monthly fulls, daily incrementals and admit that a recovery will have to scroll through many many incrementals
7. ZDLRA: basically uses a combination of redo log apply and rman incrementals to maintain a backup image of your large database elsewhere.
8. Use RMAN multi-section backups
9. Use read-only tablespaces as much as you can
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
RMAN Specific/rman/
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
--- Q: What is RMAN? How do you use RMAN?
A: Rman is "Recovery Manager," Oracle's backup/restore tool. See the Oracle Backup and Recovery Guide. R.
Freeman has a great book out called "RMAN Backup and Recovery" that is considered the bible of RMAN. Introduced in Oracle 8i, RMAN has become more refined with each version and can greatly ease backup and recovery administration tasks.
--- Q: Why use RMAN instead of OS-level tools?
A:
- RMAN performs compressed backups
- RMAN automates your recovery operations.
- RMAN allows for point-in-time recovery (as do archive logs, but RMAN makes for a lot easier recovery)
- RMAN can automatically clean up archive logs once a successful backup is completed.
- RMAN does rudimentary disk checks to ensure no block corruption exists on your backup files.
- RMAN automatically checks block status, and prevents a block from being backed up if it is in an inconsistent state
- You can duplicate databases with RMAN.
- You can create standby databases w/ RMAN.
- RMAN can automatically clean obsolete and old backups.
- RMAN can store all its metadata in one catalog server for simple tracking.
--- Q: Are there any drawbacks to RMAN (over conventional/old school backup/recovery mechanisms)?
A: The main drawbacks are:
- no object-level recovery (like restoring one table from an exported dump)
- point-in-time recovery is complex to set up, but the payoffs are great.
--- Q: How do you know if you have RMAN installed and can use it?
A: $ORACLE_HOME/bin/rman Comes default with versions 8i and above...
--- Q: What is the value of the catalog server? Why not just run script-based rman backups all the time?
A: 2 primary reasons:
- without a recovery catalog server, the RMAN scripts cannot be stored in a database schema
- using a recovery catalog server ensures that the RMAN backup history information is maintained. This can be a larger concern, as records can get aged out of control files (default parameter control_file_record_keep_time is 7 days)
Without a catalog server, you can still report schema, validate backups, etc.
However, by using control files exclusively instead of using a catalog server, you limit the amount of rman backup history records (since control files have size/record limits).
--- Q: How do I create the RMAN Recovery Catalog? How do I create the repository?
A: (see chapter 16 of the Oracle9i Recovery Manager User's Guide for documentation). (see chapter 10 of the Oracle10g Backup and Recovery Advanced User's Guide)
The documents suggest NOT using the same DB that you want to back up as the catalog host... for obvious reasons. However, the catalog has to reside *somewhere*. Find the most appropriate spot for it. I've generally created a small separate database instance, called it "RMANSVR" and created the rman user and catalog.
- Create the rman user, by logging into the host db instance as sys
CREATE USER rman IDENTIFIED BY rman TEMPORARY TABLESPACE temp DEFAULT TABLESPACE tools QUOTA UNLIMITED ON tools;
GRANT RECOVERY_CATALOG_OWNER TO rman;
GRANT create session TO rman;
- Create the rman catalog through the rman command:
$ rman catalog rman/rman@rmansvr
or, from the rman prompt:
RMAN> CONNECT CATALOG rman/rman@rmansvr
(this will tell you "recovery catalog is not installed")
RMAN> create catalog
(this will simply return "catalog created" and you're done.) Log in as rman, do "select table_name from user_tables" to confirm. You should see 30 tables (in 9i) created under the rman schema (38 in 10g).
--- Q: How do you connect to rman (whether it be to the control file-based repository or to a catalog server)?
A: If control-file based, then on the server in question (set ORACLE_SID)
$ rman
RMAN> connect target
connected to target database: ORAMYSID (DBID=1234567890)
If you're using a catalog server, you have to connect to the catalog server and then the target database.
You can do this one of several ways: Set your $ORACLE_SID to be the "target" database that you're registering and run this: $ rman target sys/sys@targetserver catalog rman/rman@stg30dev or $ rman target sys@stg30dev catalog rman@stg30dev if you don't want to show passwords ... you'll be prompted for both pwds. Alternatively you can do this: $ rman catalog rman/rman@stg30dev RMAN> connect target --- Q: How do you register your database within RMAN catalog server? A: After connecting (as above), you'll get an error like this if your db is not registered; RMAN-06004: ORACLE error from recovery catalog database: RMAN-20001: target database not found in recovery catalog Ignore this error: Just type the following: RMAN> register database; This will register the database you've connected to as the target. If you're running this as oracle and are local to the machine with the database, no additional login is necessary, b/c oracle's already allowed to login as sysdba based on its membership in the dba group at the OS level. You can also do this: # rman catalog rman/rman@stg30dev RMAN> connect target sys/sys@targetserver RMAN> register database; Once done, you can test your work by : RMAN> report schema; If you attempt to register a database that's already been registered, you'll get an RMAN-20002 error that says as much (but is harmless). --- Q: What are some common configuration values to startup with RMAN? A: Suggestion: create a file called "rman_server_config.rcv" with the following (change directories as appropriate) (see $ORACLE_HOME/rdbms/demo/*.rcv for demo files that fully explain what these mean). 
# backups to disk, as opposed to tape (if tape, other options needed)
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
# keeps at least 5 copies of each file
CONFIGURE RETENTION POLICY TO REDUNDANCY 5;
# uses two channels (server processes) to write data
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
# specify disk directory
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/array/backup/rman/ora_df%t_s%s_s%p';
# backs up the control file too (always a good idea)
CONFIGURE CONTROLFILE AUTOBACKUP ON;
# tells RMAN where to backup the control files
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/array/backup/rman/ora_cf%F';
# allows for point in time recovery going back 30 days
# (note: this REPLACES the redundancy policy above; only one retention policy is active at a time)
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
# tells RMAN to be smart while backing up, and not re-backup files already backed up
CONFIGURE BACKUP OPTIMIZATION ON;
# select * from TS to see which tablespaces ARE included
# tablespaces to exclude: no need to specify temp
# Note: ONLY exclude tablespaces that you KNOW you do not use or which you KNOW you're better
# off creating by hand ... never exclude any system tablespaces
CONFIGURE EXCLUDE FOR TABLESPACE indx;
You can put all these in one file, and then run the file from RMAN like this:
RMAN> @rman_stg30dev_config.rcv
or
$ rman target sys/sys@stg30dev catalog rman/rman@stg30dev cmdfile rman_stg30dev_config.rcv
Note: these are configured ONCE for each database, and are stored in the repository going forward.
--- Q: How do I change parameters in RMAN?
A: the configure command. You can change them and RMAN will report old and new values like this:
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
old RMAN configuration parameters: CONFIGURE DEFAULT DEVICE TYPE TO DISK;
new RMAN configuration parameters: CONFIGURE DEFAULT DEVICE TYPE TO DISK;
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete
You can always look at the rman catalog table "CONF" to see these values.
You can always look at the rman catalog table "TS" to see what tablespaces ARE being backed up. If you exclude a tablespace that you now want to INCLUDE, you can run this command:
RMAN> configure exclude for tablespace cwmlite clear;
If you want to see what is being excluded: RMAN> show exclude (or show all, to get all parameters)
You can clear any existing configure parameter and restore it to its default by calling the same configure command and putting "clear" at the end.
--- Q: How can I see all my configuration parameters in RMAN?
A: RMAN> show all; to see all the configuration parameters
or, sqlplus rman/rman@rmanrepository and SQL> select * from CONF;
--- Q: What is a good quick script to do a full cold backup?
A: I've got these steps in a file called rman_full_dbshutdown_backup.rcv
STARTUP FORCE DBA;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
BACKUP INCREMENTAL LEVEL 0 DATABASE FILESPERSET 4;
ALTER DATABASE OPEN;
--- Q: What is a good quick script to do hot incrementals?
A: I've got these steps in a file called rman_incr_weeklyroutine_backup.rcv
DELETE BACKUP COMPLETED BEFORE 'SYSDATE-7' DEVICE TYPE DISK;
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DEVICE TYPE DISK DATABASE FILESPERSET 4;
BACKUP BACKUPSET ALL; # copies backups from disk to tape
BACKUP ARCHIVELOG ALL;
DELETE ARCHIVELOG UNTIL TIME 'SYSDATE-7';
And I've got a script called rman_nightly.sh, running at 2am in cron, that looks like this:
for server in STG30DEV
do
echo $server
rman target sys/sys@${server} catalog rman/rman@rmansvr << EORMAN
@rman_incr_weeklyroutine_backup.rcv
EORMAN
done
--- Q: Can I do hot backups of tablespaces?
A: yes:
- if you have read-only tablespaces, you can back them up at any time
RMAN> BACKUP TABLESPACE read_only_tablespace_name;
--- Q: What is a good list of Maintenance commands within RMAN?
A: (these all assume you're logged in, and connected to the target server)
RMAN> restore database validate;
this checks your database to see if it's restorable. Any errors will be shown.
Immediately follow with report schema; to figure out which datafile #'s refer to which tablespaces.
RMAN> RESTORE TABLESPACE read_only_tablespace_name VALIDATE;
same for your read-only tablespaces.
RMAN> RESTORE CONTROLFILE VALIDATE;
same for your control files
# check if archivelogs for the past two weeks can be restored
RMAN> RESTORE ARCHIVELOG FROM TIME 'SYSDATE-14' VALIDATE;
RMAN> report schema;
this shows all the tablespaces in the target server, whether they're being backed up or not (excludes temporary tablespaces by default, since you'll never back them up).
RMAN> report need backup;
tells you what needs to be backed up. For example, if you've added a new tablespace.
RMAN> crosscheck backup;
verifies that all backups on the backup media are intact.
RMAN> delete obsolete;
this will purge any and all backups that it knows are no longer needed. It reads the retention parameter (in our example, 30 days) and deletes older backups. It then deletes unneeded archive log files (which are any archive log files that are older than the datafile backup they support).
RMAN> list backup
RMAN> list backup summary;
these list the backups existing in the RMAN catalog.
--- Q: What is an example of a restore .RCV Script?
A:
SET DBID 3201665279;
CONNECT TARGET STG30DEV;
STARTUP NOMOUNT;
RUN {
# uncomment the SET UNTIL command to restore database to the incremental
# backup taken three days ago.
# SET UNTIL TIME 'SYSDATE-3';
# set this to be the same as your current backup location and format name
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/array/backup/rman/ora_cf%F';
# restore the control file, then the database.
RESTORE CONTROLFILE FROM AUTOBACKUP;
ALTER DATABASE MOUNT;
RESTORE DATABASE CHECK READONLY;
RECOVER DATABASE NOREDO;
ALTER DATABASE OPEN RESETLOGS;
}
--- Q: What is a list of sample recovery scenarios and RMAN commands?
A: 1.
recover the entire database; database starts but cannot read a datafile (as an example)
RMAN> connect target
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open;
2. restore a tablespace
sql> alter tablespace X offline immediate;
RMAN> restore tablespace X;
RMAN> recover tablespace X;
sql> alter tablespace X online;
(note: you can also run sql from rman like this: RMAN> sql 'select * from dual';)
3. Restore a block that's been corrupted. If you get Ora-01578 errors in the alert log like this:
ORA-01578: ORACLE data block corrupted (file # 7, block # 3)
ORA-01110: data file 7: '/oracle/oradata/trgt/tools01.dbf'
you can recover like this:
RMAN> BLOCKRECOVER DATAFILE 7 BLOCK 3 DATAFILE 2 BLOCK 235;
--- Q: What are some helpful hints/tricks to working with RMAN?
A: 1. alias 'rmanlogin' to 'rman catalog rman/rman@stg30dev' to save typing.
2. Document the DBID that goes with each server. The DBID is used during recovery by RMAN and is an internal unique identifier for a database. It can also be handy, since RMAN only knows of the DBID during some of its recovery operations, and it's the ONLY way to refer to a backup procedure if you're NOT using a catalog database. You can get the DBID when connecting to a target within RMAN.
dbname dbid db_key in rman
DW30DEV (DBID=2078697134) 18
STG30DEV (DBID=3201665279) 1
DW20TST (DBID=3312156904) 35
STG20TST (DBID=1618016879) 367
DW30TST (DBID=2692760053) 596
STG30TST (DBID=2237182618) 600
Then, match up these DBIDs to their keys by logging in as rman and sql> select * from db. This will allow you to know which db_keys are which servers within RMAN's catalog. (This data is also located in the DBINC table in the rman catalog.)
3. Save configuration parameters into a file for later use. You can also get the configuration for your database by logging in as rman and sql> select * from conf where db_key=<key>; (If for no other reason than to be able to quickly cut-n-paste them and configure a new server)
4.
You can call scripts from the RMAN command line all at once like this:
$ rman target sys/sys@db1 catalog rman/rman@rmansvr CMDFILE abc.rcv LOG out.log
5. You can store scripts in RMAN like this:
RMAN> replace script { ... your script here }
---
Q: What are some example scripts for RMAN? How do you run backups within RMAN?
A: Example scripts (from various oracle-L/lazydba posts):
rman> create script full_backup
(or) rman> replace script ts_system_backup
{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
sql 'alter system switch logfile';
resync catalog;
backup database plus archivelog delete input tag 'Full Backup';
(or) backup tablespace system format='/fs1/backup/al_%d%t%p';
}
This creates an "rman script" called "full_backup" that can be called from a shell script like this:
#!/bin/sh
rman catalog rman/password@rman target sys/password@systest << EOF
run {execute script full_backup;}
EOF
---
Q: How do you do a recovery using RMAN?
A: scenarios:
1. recover an entire database: need more details
rman> restore database, open database to mount phase
sql> alter system set log_archive_dest_1=/directory/where/log/archives/reside
sql> set autorecovery on
sql> recover database auto until cancel
2.
recover one tablespace; very straightforward:
sql> alter tablespace X offline immediate;
RMAN> restore tablespace X;   -- restores the file from the backup
RMAN> recover tablespace X;   -- runs through all redo/archive logs, re-enacting transactions
sql> alter tablespace X online;
If you try to online the TS before doing the recover step, you'll get the "media needs recovery" error message in oracle, like this:
SQL> alter tablespace dimensions online
*
ERROR at line 1:
ORA-01113: file 13 needs media recovery
ORA-01110: data file 13: '/array/oradata/DW30DEV/dimensions_01.dbf'
You can also recover a tablespace until a certain time like this (note: do NOT combine restore ... with until clause; use the set commands):
RMAN> restore tablespace dimensions until time "to_date('07/13/2005 17:00:00','MM/DD/YYYY HH24:MI:SS')";
RMAN> RESTORE tablespace dimensions UNTIL TIME "TO_DATE('07/13/05','MM/DD/YY')";
RMAN> restore tablespace dimensions until time 'sysdate-1';
RMAN> run {
set until time "to_date('07/13/2005 17:00:00','MM/DD/YYYY HH24:MI:SS')";
restore tablespace dimensions;
recover tablespace dimensions;
}
3. recover a control file: ?? (need a scenario test)
---
Q: What is the difference between "controlfile autobackup" and "snapshot controlfile" in the RMAN configuration?
A: (from oracle-L conversations 2/2/05)
- snapshot controlfile: grabs a quick copy of the control file to hold for the duration of the backup (for a consistent view).
- controlfile autobackup: takes a conventional backup of the controlfile.
---
Q: How do I see a full list of all configuration parameters?
A: RMAN> show all;
---
Q: Can I disconnect from one target and reconnect to another on the fly?
A: No (but it's easy enough to quit and re-login).
---
Q: How do I get RMAN to recognize a new tablespace I've just created?
A: You don't have to worry about it; all new tablespaces are automatically backed up unless you specifically exclude the file from backup.
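The flip side of the answer above -- deliberately excluding a tablespace so RMAN skips it -- can be sketched like this (the tablespace name here is a hypothetical example):

```sql
-- exclude a tablespace from whole-database backups (example name)
RMAN> CONFIGURE EXCLUDE FOR TABLESPACE example_ts;

-- confirm the exclusion, then see which files remain in backup scope
RMAN> SHOW EXCLUDE;
RMAN> report schema;
```

The matching CONFIGURE EXCLUDE ... clear syntax to undo this is covered further down in this section.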
---
Q: How do I know what files are not being backed up?
A: Run these two commands:
- restore database validate: any new datafiles will fail validation or will report as missing.
- report schema: and note the file numbers missing.
---
Q: How can I validate my backup when I'm excluding specific datafiles/tablespaces, since restore database validate will fail in this case?
A: You cannot; you'll have to perform individual tablespace validates:
RMAN> restore tablespace system validate;
RMAN> restore tablespace XDB validate;
...
---
Q: How do I unregister a database?
A: Log into the rman catalog as rman, and get the db_key for your database:
select d.db_key, d.db_id, di.db_name
from db d, dbinc di
where di.db_key = d.db_key and di.db_name='DW20TST';
SQL> EXECUTE dbms_rcvcat.unregisterdatabase(db_key, db_id)
?? but this didn't work; I can't reregister the database.
---
Q: How do I detect any block or database corruption?
A: RMAN automatically detects such corruption as it backs up the data. Log in as system to the database in question, and run these two queries:
SQL> select * from v$copy_corruption;
SQL> select * from v$backup_corruption;
---
Q: What is the difference between a full, a whole database and an incremental level 0 backup?
A: (why Oracle makes this so confusing, I'll never know). Definitions from the RMAN manuals (even with these definitions, the waters are still murky):
- "Full" backup: a backup that is not incremental. A full backup includes all used data blocks in the datafiles. Full backups of control files and archived logs always include all blocks in the files.
- "Whole" backup: a backup of all datafiles and the current control file.
- "Incremental level 0" backup: a level 0 incremental backup, which is the base for subsequent incremental backups, copies all blocks containing data. The only difference between a level 0 backup and a full backup is that a full backup is never included in an incremental strategy.
??? Still don't believe I have a handle on this.
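As a concrete sketch of how the three backup types in the definitions above are invoked:

```sql
-- level 0: copies all used blocks; serves as the base of an incremental strategy
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- level 1: copies only blocks changed since the most recent level 0 (or level 1)
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- "full": copies the same blocks as a level 0, but can never serve
-- as the base of an incremental strategy
RMAN> BACKUP DATABASE;
```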
---
Q: How do I get remote servers backed up to my local disks?
A: You'll have to mount your disks onto the remote database server, so they appear to be local to the remote database when it performs its backups. All rman commands are passed through to the remote database in question, and run as if you were logged in directly to the machine. This is a limitation in rman, in my opinion.
---
Q: How do you recover from a typo in the control file backup configuration parameter? (i.e., ORA-01580: error creating control backup file /array/backup/rman/DW20TST/snapcf_dw20tst.f, and you can't unregister, clear the command, etc.)
A: Instead of clearing the variable, just reset it:
RMAN> configure snapshot controlfile name to '/ehridb1/rman/DW20TST/snapcf_DW20TST.f';
---
Q: I keep getting "waiting for snapshot controlfile enqueue" messages while trying to do RMAN operations. How do I fix this?
A: Some session (probably a failed rman session) is holding the control file. You'll have to kill the session. Though one situation was resolved by switching the archive log file on a *different* server on the same machine. Why? Very odd situation.
---
Q: How do I duplicate/clone a database in RMAN?
A: There's an entire chapter in the Recovery Manager docs dedicated to duplicating databases. Metalink notes:
Note 73974.1 - RMAN: Restoring an RMAN Backup to Another Node
Note 228257.1 - RMAN Duplicate Database in Oracle9i
Note 259694.1 - same for 10g
Basic steps:
1. Copy the DB FULL RMAN backup sets from the source server to the target server.
2. Perform the following steps from the target:
$ rman target /
RMAN> startup nomount pfile=/u01/app/oracle/product/10.2.0/dbs/initORAIMPL.ora
RMAN> restore controlfile from '/u04/backups/ORAIMPL/rman/c-3687490407-20071215-00_ORAIMPL';
RMAN> alter database mount;
3. Run this list command on the source to find the most recent sequence #:
RMAN> list backup of archivelog all;
4.
Run the rman script:
RMAN> run {
set until sequence 4634;
allocate channel ch1 type disk;
allocate channel ch2 type disk;
allocate channel ch3 type disk;
restore database;
recover database;
alter database open resetlogs;
}
---
Q: What is the rman syntax for a point-in-time recovery?
A: rman> RECOVER DATABASE UNTIL TIME '1998-11-23:12:47:30'
---
Q: What does the following mean and how do I fix it?
RMAN-06207: WARNING: 5 objects could not be deleted for DISK channel(s) due
RMAN-06208: to mismatched status. Use CROSSCHECK command to fix status
A: Rman has gotten confused about the status of certain files that have been requested to be deleted. Run RMAN> crosscheck backup; to fix.
---
Q: How do I include a TS that I previously excluded?
A: RMAN> CONFIGURE EXCLUDE FOR TABLESPACE 'CWMLITE' clear;
---
Q: How long does it take to back up objects? To recover objects?
A: Completely database dependent; each site needs its own benchmarks.
---
Q: What is the compression potential of Rman backup files? (as in, if I gzip or zip or compress an rman file, how much space will I save?)
A: Typically between 85-96% space savings, depending on how much data is in the file.
case 1: 301103813 gz -> 1991819264 uncompressed (85% compression rate)
case 2: 1828298 gz -> 46039040 uncompressed (96% compression rate)
---
Q: Can you use environment variables in .RCV scripts?
A: Officially, no. You may get errors like this in your output:
ORA-07217: 'sltln: environment variable cannot be evaluated.'
However, some success has been seen passing variables from a backup script to the RCV script and having them properly evaluated (not ORACLE_SID though). Remember to export the variables after assignment, else the rcv script won't get them. Best not to tempt fate: just don't use $vars passed in by calling scripts.
---
Q: How do you put comments into an RCV script?
A: The "#" character is the comment character, not "--" or "/* enclosed */" like with other Oracle scripts.
---
Q: What do the various "%" parameters mean in .rcv files?
A: Are these related to the filename formats available to archive logs??
%F: prints out the DBID, the date and a sequence number ("c-3687490407-20070530-05")
%s: the backup set number (e.g., 125)
%p: the piece number within the backup set (starts at 1, which is why it only prints out #1 for single-piece sets)
%t: the backup set timestamp (e.g., 624030459)
%d: the database name
---
Q: Why do rman backup repositories constantly need to be crosschecked?
A: Because files oftentimes get moved or aged off of disk/tape repositories outside of RMAN. Whenever rman can't find a file that it expects, it requires a crosscheck to be run. Syntax:
RMAN> crosscheck backup;
RMAN> crosscheck archivelog all;
---
Q: I'm connected to my repository, connect target newDB, and get:
PL/SQL package SYS.DBMS_BACKUP_RESTORE version 10.02.00.00 in TARGET database is not current
PL/SQL package SYS.DBMS_RCVMAN version 10.02.00.00 in TARGET database is not current
How do I fix this?
A: This error is apparently indicative of a version incompatibility between the attempted catalog server and the database. The example error above was seen where the catalog server was 10.2.0.3 on linux and the target database to register was 10.2.0.1 on windows. Odd; shouldn't have occurred.
---
Q: I have my recovery window set to 3 days but my 7-day-old full backup won't get dropped once I get a new full. Why?
A: It's probable that records have aged out of your control-file-based rman catalog and thus can't be reported on. Example: level 0 backups weekly, level 1 backups nightly. Database parameter "control_file_record_keep_time" is set to its default value of 7 days. So, when the next full backup is done, it's past 7 days and the records of the previous full backup have aged out. Thus delete obsolete can't find the previous full backups, and they have to be deleted manually at the OS level. Furthermore, once you delete any file at the OS level, you have to re-run crosschecks to clean up the rman catalog.
Solutions:
- use a catalog server, which has no such age limitations
- alter system set control_file_record_keep_time=14 scope=both;
---
Q: I get the message "new incarnation of database registered in recovery catalog" when I connect target my database. What does this mean?
A: This means your database was opened sometime in the past with "resetlogs" and thus a new database incarnation was set. OR, it means you're logging into a newly cloned version of the database and the dbid has changed versus the stored version.
---
Q: How can I get a list of all the scripts in my rman catalog?
A: RMAN> list script names;
---
Q: What is a good cleanup rman backup script to run?
A:
connect target /
configure retention policy to redundancy 7;
run {
crosscheck backup;
crosscheck archivelog all;
delete obsolete;
delete expired backup;
delete expired archivelog all;
}
---
Q: How do you get rman to automatically ("no prompt") delete expired backups and expired archive logs without answering yes to the question "Do you really want to delete these files?" (needed for unattended runs)?
A: Use delete noprompt!
---
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Security/
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Q: What is a good way to generate random numbers for data?
A: This isn't terribly secure necessarily, but one can generate random numbers by using the dbms_random procedures. Seed the random generator, then use a substr to get the number of digits wanted. Here's an example to come up with random 9-digit numbers for SSN fields:
SQL> exec dbms_random.seed(5);
UPDATE prsn SET ssn=SUBSTR((10000000000000*dbms_random.VALUE),0,9);
---
Q: How can I password-protect data?
A: dbms_obfuscation_toolkit (deprecated in 10g in favor of dbms_crypto).
---
Q: Why is "grant select any table" dangerous?
A: Because it literally allows the user to select from any table, including sys and system tables, and would allow a malicious person to read from password and audit tables, any application tables, and any fields within any app tables that may contain sensitive or private information (ssn, salary, face data, etc.).
---
Q: Can you do schema-wide grants? Can you say "grant select on userA.* to userB" like in other databases such as mysql?
A: No. There is no grant select on userA.* to userB; there is no grant select on any table in schema userA to userB. You'll have to loop through the tables and grant selects, with code like this:
select 'grant select on userA.' || table_name || ' to userB;'
from dba_tables
where owner='USERA'
order by table_name;
---
Q: Are the "CPU" patches (Critical Patch Update) cumulative, or do you have to load up the previous quarter's patches to install a new one?
A: They are cumulative; if you skip a quarter and install the subsequent one, you should still be fine. Some exceptions exist for some of the more esoteric products.
http://www.oracle.com/technology/deploy/security/alerts.htm
This link has ALL the CPUs going back to Jan 2005 and is a great resource for looking at CPU patch release notes.
Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)
---
Q: Do CPUs generally force you to catch up your database to the latest revision?
A: Yes, especially for older products. For example, the Jan 2009 CPU will only install on these versions of the database:
9i: 9.2.0.8
10g r1: 10.1.0.5
10g r2: 10.2.0.2 - 10.2.0.4
11g: 11.1.0.6
In our case, lots of people were still on 9.2.0.7 or 10.1.0.4.
---
Q: How do I get a list of all users that have a particular role granted to them? How do I see what roles all the users have? Who has the DBA role on my server? Who has what roles on my server?
A: select * from dba_role_privs where granted_role='DBA';
---
Q: What permissions are granted with the typically used "startup" roles?
A:
- connect: prior to 10gR2, had create and alter session, create cluster, dblink, sequence, synonym, table and view.
- connect: as of 10gR2, create session only (it used to have other privileges associated with it and was viewed as a security hole). Deprecated.
- resource: comes with a series of create object privs (cluster, indextype, operator, proc, sequence, table, trigger, type). Exists in 10g for backwards compatibility.
- create session: as it sounds; allows a user to create a session. Without this privilege, a user can't log in.
---
Q: What *should* I grant to new users?
A: As of 10g: just create session for beginners; connect and resource are obsolete and give too many permissions to connect-only users.
---
Q: Can a role be granted quotas on tablespaces? Can you do this:
SQL> alter role role_name quota 75m on users;
A: No, there is no quota on a role; quota must be individually assigned per user. Ugh.
---
Q: What are the various account statuses in dba_users and what do they mean?
A:
- EXPIRED & LOCKED: default status the dbca installer leaves the various system accounts in, for security purposes.
- LOCKED: account manually locked w/ alter user.
- OPEN: normal account w/ no issues.
- LOCKED(TIMED): account that has been locked by too many invalid logins.
---
Q: How many times can a user attempt to log in before locking their account? How do you change this behavior?
A: Oracle 10g by default locks user accounts (sets status to "LOCKED(TIMED)") after TEN (10) incorrect logins. After this 10th failed login, the dba must issue alter user X account unlock;
How do you fix this? Modify the default user profile, setting FAILED_LOGIN_ATTEMPTS to a value other than 10. You can set "UNLIMITED", but this will result in a security hole; setting it too low can result in normal users getting locked out.
alter profile default limit FAILED_LOGIN_ATTEMPTS 15;
alter profile default limit FAILED_LOGIN_ATTEMPTS UNLIMITED;
---
Q: How do you see the default user profile if none is created?
A: select * from dba_profiles where profile='DEFAULT';
---
Q: How do I create my own profile?
A: Create a profile, assign limits to it, then alter user and give the profile.
---
Q: How do you prevent a user from logging in during business hours?
A: There might be other ways, but this logon trigger works great:
CREATE OR REPLACE TRIGGER logon_trigger
AFTER LOGON ON DATABASE
BEGIN
  -- prevent DRES_RO user from logging in from 9am to 5pm
  if (user = 'DRES_RO') then
    IF (to_number(to_char(sysdate,'HH24')) >= 9)
       and (to_number(to_char(sysdate,'HH24')) <= 17) THEN
      RAISE_APPLICATION_ERROR(-20005,'DRES_RO Logon only allowed outside business hours');
    END IF;
  end if;
END;
/
---
Q: What are some good security benchmarks/industry standard tools used to secure databases?
A: 2 main ones:
- DISA's Security Readiness Review (SRR)
- CIS's Benchmarks
DISA Field Security Operations Database Security Readiness Review is the main list.
http://iase.disa.mil/stigs/checklist/
DISA.MIL's Security Readiness Review Evaluation Scripts
http://iase.disa.mil/stigs/SRR/index.html
NIST.gov's Database Security Technical Implementation Guide
http://iase.disa.mil/stigs/SRR/index.html
Another well known checklist is the Center for Internet Security checklist.
http://cisecurity.org/en-us/?route=default
Download the best practices benchmark xls for database.
NGS Squirrel: commercial solution recommended by Wali Ali.
---
Q: How do you tell if any objects in your database are encrypted?
A: http://docs.oracle.com/cloud/latest/db121/TDPSG/tdpsg_encryption.htm#TDPSG40433
select * from dba_tablespaces where encrypted='YES';
SELECT * FROM DBA_ENCRYPTED_COLUMNS;
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Replication/Standby/Data Guard Operations/Data Guard/Active Data Guard/
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Q: What are operations you should never do in a standby-database environment, if you'd like to maintain the integrity of your data?
A: (from Oracle-L conversation 12/7/04)
- using NOLOGGING as an option during DML
- issuing alter system CLEAR ARCHIVE
- RECREATE CONTROL FILE
- RESETLOGS
- ARCHIVE LOG STOP
- Generally, any non-logged operations should probably be avoided: unrecoverable, insert /*+ append */, etc. Use "force logging" as a system-wide operation.
- use of truncate table; perhaps an obsolete bug
- If you add a new datafile to the primary, you must add it to the standby or recovery ceases
- possibles (but not proven): moving archive log locations, renaming the database
- If you increase f/s space on the primary, you'll have to match the change on the backup
- Recommendation: use Data Guard.
---
Q: What is the difference between "redo apply" and "real-time apply" in Data Guard?
A: 2 options for physical Data Guard:
- redo apply: default setting; applies changes from standby redo logs after they are archived off.
- real-time apply: log apply services apply redo as soon as it is received, rather than waiting for archival.
Pro/con:
- real-time apply: faster switchover, faster failover.
- redo apply: less load on the source server.
- SQL apply: logical standby only; reconstitutes sql from the redo logs and then applies that sql on the logical failover. This allows the replicated tables to be read-only and kept up to date with the primary, while the rest of the database is kept in read-write mode and functions normally.
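The "force logging" recommendation in the standby checklist above can be applied and verified system-wide like this:

```sql
-- put the primary in force logging mode so NOLOGGING operations
-- still generate redo for the standby
ALTER DATABASE FORCE LOGGING;

-- verify (should return YES)
SELECT force_logging FROM v$database;
```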
---
Q: What are some common commands used to maintain/administer a physical standby database?
A: From Chapter 8 of the Data Guard Concepts & Administration guide:
A physical standby server works in perpetual recovery mode, constantly reading and "recovering" archive logs that are shipped to it from the primary. You cannot directly log into a physical standby server as a normal user (only as sys), and even sys cannot query certain objects (anything based on non-fixed objects returns: ORA-01219: database not open: queries allowed on fixed tables/views only).
o To log into a standby database:
c:> set oracle_sid=dcsappp1
c:> sqlplus
SQL> connect sys as sysdba
o To restart a standby database using "real-time apply":
shutdown immediate
startup nomount
alter database mount standby database;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE PARALLEL 2 DISCONNECT FROM SESSION;
o To confirm that recovery is going on, issue this command on the primary database:
select * from v$archive_dest_status;
and look at the "recovery_mode" column:
if using "redo apply": MANAGED_RECOVERY
if using "real-time apply": MANAGED REAL TIME APPLY
o To confirm that the standby server is running, issue this:
SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;
PROCESS   STATUS
--------- ------------
ARCH      CLOSING
ARCH      CLOSING
MRP0      APPLYING_LOG
RFS       IDLE
RFS       IDLE
...
If the MRP0 or MRP process exists, then the standby database is applying redo.
o To open a standby database read-only:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE cancel;
ALTER DATABASE open;
Then, to bring it back into recovery mode from read-only mode, use the last alter database statement above to re-alter the database and get it to catch back up.
----
Q: What are some of the platform limitations for configuring Physical Standby?
Q: Can I configure Data Guard from platform A to platform B (i.e., Windows to Linux)?
Q: Is Data Guard platform independent? Data Guard compatibility matrix?
A: Generally speaking, your primary and standby servers must be:
o the same operating system flavor (windows->windows, linux->linux)
o the same version of Oracle (10g->10g)
Can I do 32bit->64bit? Yes, but it is awfully hard to convince Oracle tech support.
In 11g, a lot of these restrictions are lifted between Windows and Linux platforms; most other platforms remain like-os->like-os. See note: Data Guard Support for Heterogeneous Primary and Standby Systems in Same Data Guard Configuration [ID 413484.1]. In this note, there is a great matrix of from->to environments that are supported.
---
Q: Can you configure Oracle to do automated database failover?
A: Yes; configure Data Guard to do "Fast-Start Failover" using the Data Guard broker.
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_FastStartFailoverBestPractices.pdf
---
Q: How do I diagnose archive log gaps in my Data Guard configuration?
A: Run select * from v$archive_gap for simple information.
Run this sql on both the main and the failover:
SELECT THREAD#, MAX(SEQUENCE#)
FROM V$LOG_HISTORY
WHERE RESETLOGS_CHANGE# in
  (SELECT RESETLOGS_CHANGE# FROM V$DATABASE_INCARNATION WHERE STATUS = 'CURRENT')
GROUP BY THREAD#
(To log into the failover, you'll have to connect to the unix server as oracle and log in sqlplus / as sysdba, because it is in persistent failover mode and cannot be connected to with normal users.)
Why would there be data guard gaps? Network latency, space issues on either the primary or the failover, unavailability of the failover database, a data guard broker configuration issue, etc.
Oracle provided two useful scripts, dg_prim_diag.sql and dg_phy_stby_diag.sql, that are available at support.oracle.com: Script to Collect Data Guard Primary Site Diagnostic Information for Version 10g and Above (Including RAC)
(Doc ID 1577401.1)
(previous/old 9i versions at "Script to Collect Data Guard Primary Site Diagnostic Information for Version 9i" (Doc ID 241374.1))
---
Q: What is Active Data Guard? How does it differ from regular Data Guard?
A: Active Data Guard allows physical failover instances to be available (mounted) in read-only but full-access mode.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Administrative/Management Issues
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
---
Q: What are some good maintenance activities to do on a regular basis?
A: Adapted initially from a comp.database.oracle.server post by Leonard F. Clark (lfc@zoom.co.uk) 12/6/00, with some Oracle-L comments added, plus information gleaned from a nice white paper by Thomas Cox and Christine Choi called "Oracle DBA Checklist."
Daily
- Check Oracle processes (ps -fe | grep oracle)
- Check the listener process as well: netstat -a | grep oracle
- Check alert log for errors today: tail -f or vi the file and search for "ORA-"
- Check alert log at each reboot for any boot errors
- Check that no tables are approaching their max extents (if max extents are used)
- Check that free space in each tablespace is 90% or less
- Check that the next 5 extents of free space are large enough for the next 5 largest "next extent sizes" in each tablespace
- Look for long running jobs; make sure they're legit and not runaways.
- Look for orphaned/zombie oracle processes (ppid of 1 instead of ppid of oracle startup).
- Check v$lock for long running locks, active blocking processes
- Study high impact tables closely, looking for unexpected growth
- Verify last night's backups worked
- Verify rollback segments are online (select * from dba_rollback_segs)
- Analyze statistics for appropriate tables
Weekly
- Full schema analyze statistics, including indexes
- Look for excessive "chaining" of tables and row migration, looking for reorganization candidates
- Check growth trend analysis, look for fast growing tables
- Coalesce indexes, tables, tablespaces for space optimization
- Analyze object validate structure on all tables, indexes
- Look for invalid packages, triggers, stored procedures, stored pl/sql code
- Look at all newly created objects, looking for breakdowns in naming schemes, storage clauses, column names, etc.
- Archive alert logs, backup logs
Monthly/Adhoc/As needed
- Run statspack regularly, store output for trend analysis. Create a dbms_job that collects stats on an hourly or routine basis, run reports and then clean out the statspack tables.
- Check sar outputs for i/o, memory, page swapping, etc. Use the monitoring script /export/home/idea/perf_monitor.sh as an example.
- Check hit ratios; check to see if keep, recycle, default pools are working as desired
- Check wait stats: run spreport.sql over a long timeframe of statspack output
- Analyze worst-performing sql statements, looking for P&T hits and changes in the optimizer's query plan (see Appendix A for diagnosis system queries)
- Check growth trend analysis spreadsheets for the month, looking for longer term trends
- Check memory management statistics out of statspack. Look for memory contention, cache hit rates, lock contention, insert contention, buffer busy waits, etc.
- Analyze statistics for appropriate tables
Consider using Oracle Enterprise Manager (OEM) alerts/events.
---
Q: What does a development DBA typically do?
A:
- Advising on database architecture and the technical aspects behind database creation
- Data modeling, database design and ERD diagram support
- Metadata and data dictionary creation and maintenance
- Development and inclusion of naming and datatype standards
- PL/SQL development and tuning
- Developer and end-user support; consulting with developers and end users on how best to access the database
- Developing ETL-type programs to perform data loading
- Reviewing developer object creation scripts, pl/sql packages
- Designing security and roles for developers and end-users
- Creating and maintaining database change control and configuration management policies for database objects
- Migration of objects (tables, schemas, stored procedures, etc.) from one environment to another (development, test, quality assurance/user acceptance testing, and production)
- Performance monitoring and tuning at a database level
- Designing backup and recovery strategies (to include Disaster Recovery strategies)
---
Q: What does an Operations DBA typically do?
A:
- Day to day database administrative support: adding users, creating schemas, administering roles, extending tablespaces, moving database objects to different filesystems, etc.
- Deployment of objects to production, subject to the conditions of change control
- Performing/managing backup and recovery operations. This can include backups that occur on a recurring basis (nightly and weekly backups) as well as ad-hoc backups done to support ongoing operations.
- Testing and implementing Disaster Recovery procedures on the database
- Database monitoring: continually monitoring the database for space usage, performance concerns, runaway processes, full transaction logs, and "ORA-" errors written to the errorlog that may need attention
- Troubleshooting problems within the database.
This can include setting up test cases to try to replicate the problem, and performing research on the issue online (Metalink, AskTom, Oracle-L user group archives, newsgroups, etc.).
- Working with Oracle tech support as needed: logging SRs (formerly known as TARs), working with Oracle technicians and researching issues within Metalink's vast repository of notes and case histories.
- Patching: applying patches as needed, upgrading servers and maintaining the Oracle Inventory. This includes installing quarterly security patches as they are released.
- Monitoring database performance: running long-term snapshots of system performance, researching specific troublespots within the database, analyzing patterns of report usage and recommending configuration changes.
- Security and auditing: periodically running security checks to ensure no unauthorized access to objects has occurred; implementing and monitoring database auditing.
---
Q: What is another viewpoint of the breakdown of responsibilities between System and App DBAs?
A: From DC gov conversations:
ServerOps DBA
- oracle software, installation of $ORACLE_HOME for each app
- backup, recovery (rman): arguable per Wang, who wants to do his own backups and not use a common rman catalog server
- space capacity planning (with input from App dbas)
- allocation of a "share" of RAM on the box to each database
- monitoring: global system issues
- create and manage ASM (the central/shared database space oracle "file system")
- a DBA-capable login to each database "just in case" (frankly not needed b/c the oracle unix user can log into whatever database as sysdba from the OS level). Can this be the "dbsnmp" account? ServerOps DBAs need to be able to log into individual instance grids. Shared sysman account? Wang argues against this as well.
Application DBAs
- dbca, with input from serverops for whatever naming standards they specify and how much pga/sga you have to work with
- everything post DB creation intra-instance.
users, tablespaces, tables, data.
- instance monitoring and tuning
- code tuning specific to each application
- tactical backups? (exports)
- patching, since each app would have its own $ORACLE_HOME: one-offs, CPUs, upgrades (e.g., 10.2.0.5)
- sudo on the unix box to oracle or root (to run the root.sh script)
Shared
- the OEM "grid", since there's only one grid. ServerOps DBAs use it to do overall grid mgt, but App DBAs use it to do instance management, performance snapshots, even admin.
- overnight/severity one outages: ServerOps monitoring catches them, but if possible work with Apps DBAs to bring things back online.
- ownership of the "oracle" unix account. We need to create individual accounts for all app dbas and sudo into the oracle user for tracking and auditing.
---
Q: What is a good list of items to put into an Operations Manual written to support an Oracle database?
A: Major chapters:
General Information
- high-level system overview, business purpose
- authorized users/obtaining access to systems
- Security, Privacy Act statements
- organization of the manual
Site Profiles
- physical addresses of responsible parties, machine locations, colocation sites
- POC data for key personnel, vendor contacts (oracle, 3rd party vendors)
- staff roles and responsibilities/core support persons' contact information
- owners of the data (who to go to with source issues)
- critical end users/stakeholders and their main points of concern (hotbuttons)
License Agreements; for all 3rd party vendors list
* contract numbers; CSI number for Oracle
* list of licensed products
* phone numbers/websites for support
* expiration date of support
* internal coordinator of support if necessary
System Operations
- list of major components of the system (table format w/ version #s)
- application inventory; list of major apps using the system
- software inventory (table with component, software, ver #)
- operations inventory; list of operational systems used to support the system
- processing overview: system restrictions, SLAs on
uptime, supported hours, interfaces w/ other systems, interdependencies w/
other systems
- End user support: help desk contact email/phone, procedures, escalation
procedures
System Administration
- Accounts: obtaining an account, security procedures for IDs/pwds, process for
assigning temporary passwords
- Directory structures; mounts and purposes
- System Inventory: table of machines with machine role, vendor, model,
serial #, OS, kernel patch/service pack level, RAM, # of CPUs, other config
notes
- System Maintenance; OS and application patching procedures, maintenance
windows, system-level auditing and logging
- Network Maintenance: LAN design, any pertinent information
- Software Configurations: broken down by application. Tables of core
configuration files
Database Administration
- list of users/schemas w/ purpose
- list of roles and purpose
- starting/stopping scripts/procedures
- Data Load procedures
- Data Backout procedures
- data retention/data archiving strategy
- Ongoing DBA Tasks
* adding users, granting permissions
* Materialized view maintenance
* reporting and running reports
* running backups: see backup section
* performance and tuning: gathering stats nightly/weekly
* monitoring error logs; looking for major and minor errors
* index maintenance tasks: looking for unused or missing indexes
* system performance monitoring; proactively running system-wide performance
snapshots, analyzing frequently run sql
Backup & Recovery (system level and database)
- Backup procedures; high-level overview of backup, explanation of backup
procedures, list of resources being backed up
- backup locations/disk structures
- table with machines, database, policies, start times, estimated run times
- monitoring/maintenance details; checking backup logs for errors, confirming
disk space exists, archiving off old backups if necessary
- Recovery scenarios and procedures for recovery
Disaster Recovery (system and database)
- recovery scenarios and procedures for recovery
Application Maintenance
- application-level user admin in front end apps, portals, ldap, etc.
- application administration details; starting/stopping services, monitoring,
troubleshooting
Change Control and/or Change Management procedures
Configuration Management details, if they exist
- include documented steps taken during installation of databases and
applications
Appendices: referenced materials/sites, acronyms used, system diagrams.
For each technical process, include an explanation of each of:
- How the process works
- Monitoring details
- Data Validation
- Script Names/Configuration Management locations
- Estimated run times
- Expected Results
- (4) 9/16/04: possible things to add to an Ops Manual (per disc on oracle-l)
for an Oracle database:
o An overview of the storage subsystem, like SANs or anything similar.
o Logins: OS, Oracle and others.
o Any tricks done to improve application performance because the
developer/vendor won't/can't change their code.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Data Warehousing/Data Warehouse Specific
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
---
Q: What are some commonly implemented Data Warehousing tricks? DW Best
practices/DW 101/Data Warehousing 101
A:
- star_transformation_enabled: allows the server to execute queries using a
star transformation if the indexes are set correctly
- query_rewrite_enabled: allows the optimizer to analyze and rewrite queries to
use star transformations instead of conventional database joins.
- query_rewrite_integrity: tune this option (advanced)
- grant query rewrite to users: consider granting this to connect to make sure
all users have access
- Separate staging server configured for OLTP. Still nologging. No real benefit
to using a 2k versus the standard 8k blocksize.
- 32k block size on the data warehouse server.
- nologging on all objects: tables, indexes, tablespaces.
- very large extents
- Parallel processing
- set pctfree=1, pctused=99 in extents, since only inserts are being done.
- Mass partitioning of fact tables based on load key/load date value. Backups
done by taking partitions of data offline and copying to backup medium.
- Subpartitions between load-keys or load-dates, for performance. Subpartition
using hash and let oracle spread rows across I/O devices.
- Know your hardware: go to great lengths to separate I/O streams from each
other. Use all possible I/O devices in conjunction w/ parallel operations.
- Local bitmap indexes on all FK and low-cardinality fields.
- Heavy use of indexes in general.
- alter table exchange partition to load data efficiently into bitmap-indexed
tables.
- Creation of Dimension objects: metadata objects that describe how dimensions
of data relate to each other, used by the Query Rewrite engine to resolve
queries using star transformations.
- Heavy use of Materialized Views to do pre-aggregation of commonly run
queries. Aggregation tables that are filled by the ETL process (see RI
constraint specification).
- use materialized view logs to make them fast refreshable.
- DWs usually require PQO, which uses direct reads, which do NOT use the buffer
cache.
- DWs won't benefit much from MTS, thus large pool sizes aren't useful.
- hash_area_size, sort_area_size should be increased to 64m/process.
- Create Data Marts, subsets of the whole Data Warehouse, to service focused
queries.
- Create an ODS, a relational data model representing the current data (for
reporting on current data, to keep oltp-type queries out of the DW).
- Transportable tablespaces, by load key partition or by month/date, allowing
all work to be done on staging.
- Bind dimension/lookup tables to the KEEP pool, facts to the RECYCLE pool.
- Create all RI with "rely novalidate" options. This is done for 2 reasons:
* it allows an RI constraint to be created without checking all the existing
rows (the "novalidate" part).
This is important b/c checking FKs can be a huge performance hit.
* it allows query rewrite to be done in materialized views (the "rely" part)
Major Caveats:
- you don't want global indexes if using partitioning. Specify all indexes as
local, otherwise you cannot do the transportable tablespace switch.
- Can't have unique indexes, b/c they force global indexes.
- novalidate RI constraints do not check any of the records for actual RI
errors; you'll have to write exception reports to do this (which is ok, because
they'll be faster anyway).
---
Q: What specifically do you have to set up in your database in order to have
Star Transformations actually occur?
A:
- server parameter star_transformation_enabled set to true (you can do an
alter session on this as well)
- all foreign keys must have bitmap indexes on them
- the FKs do NOT necessarily have to be defined
- the query doing the star transformation must include at least 2 dimensions
- using sysdate in the query hits a bug in certain versions which prevents the
transformation from occurring
---
Q: What is a fact table? What is a dimension table? What are definitions of
certain types of table objects you'll find in data warehouses?
A: ?? need more detail
- Fact: a table that contains measures and event-driven data meant to be
summarized and analyzed. A fact itself is a numerical or textual observation of
the marketplace.
- Dimension: a lookup or reference table used to limit, filter or group fact
data in queries.
- Aggregate: pre-collected and pre-joined tables of fact and dimension data,
designed to pre-answer frequently asked questions.
- "Factless Fact" (actually a measureless fact): fact tables that end up
containing nothing but FKs to dimensions. Examples generally describe Events
and Coverage. Example:
- Degenerate Dimensions:
- Conformed dimension:
---
Q: What is a type 1 dimension? What is a type 2 dimension? What is the
difference?
A: Type 1 dimensions are reference tables for which the client does not want or
need to track historical changes.
Simple examples might be gender or state codes.
Type 2 dimensions are tables where the values of codes have changed over time
and must be tracked for consistent reporting. Person last names (as people
marry or change names) or other codes that get re-used are good examples.
Type 3 dimensions create an "old" field in the dimension record to store the
immediate past value. This is rarely used, but makes sense in cases where the
dimension tracks a situation where both the old and new values need to be
available for a period of time. An example is a sales region that changed
boundaries but has a transitional period. Often called a "soft" change
dimension.
---
Q: What are the different kinds of fact tables?
A:
* Additive: facts that can be summed up through all of the dimensions in the
fact table.
* Semi-Additive: facts that can be summed up for some of the dimensions in the
fact table, but not the others.
* Non-Additive: facts that cannot be summed up for any of the dimensions
present in the fact table.
Or, taking another grain, all fact tables are either:
- Cumulative: records describe what has happened over a period of time;
contains additive values
- Snapshot: always has a date; includes the static state of things over a
period of time.
---
Q: Do you need a PK on a fact table?
A: Theoretically no, but they can be useful when it comes to troubleshooting
data.
---
Q: What are typical ways that Data Warehousing implementations fail?
A: Here are some reasons I've seen data warehouse implementations fail. I'm
guessing you want specifics related to your industry; if so, this answer won't
help you, as I've never done a warehouse specific to "consumer finance
analytics."
1. Poor data model: if you let a transactional modeler be your dimensional
modeler, you'll end up with fact-table joins and snowflaking and absolute join
nightmares.
2. Snowflaking in star schemas: I've yet to see a front end BI tool that can
properly use snowflaked joins. Instead of doing star transformations or merge
joins, the engines end up doing nested loops and then hash joins and lose all
the efficiency of the dimensional models.
3. ETL architecture: hire a senior ETL architect to at least set up your data
flow, overall ETL strategy and ETL tool architecture. If you think you can
hire and train an existing senior staff member, you'll end up with a
rudimentary implementation that doesn't work.
4. Thinking you can do world class ETL operations without a COTS tool: I've
seen a sql-loader routine with embedded lookup stored procedures take weeks to
load a 250M data set. I've then watched a senior Informatica developer write a
load process that did the same operation in about 18 hours.
5. Make sure your BI developers know what they're doing, don't work in a
vacuum, work with the data modeler/DBA and ensure they utilize the partitioning
of the fact tables. If you try to query against billion-row fact tables and
DON'T use the partitioning, you'll never get the performance you need to keep
end users happy.
6. Utilize data marts when appropriate; following up on #5, there will be times
where you need to do cross-partition queries quickly. Instead of trying to
force bad queries onto a database, write an additional ETL process and create
one-off data marts that are catered to your needs.
7. Properly configure your database and make use of all the tricks available.
I'm most familiar with Oracle, so make sure you're using star transformations,
parallel access, partitioning, bitmap indexes, 32k block sizes, query rewrite,
transportable tablespaces, alter table exchange partition, lots of nologging on
your objects, keep and recycle pools, rely novalidate on your FK constraints
and materialized views to cover queries.
8. Plan ahead for backup/recovery and disaster recovery.
Procure enough disk space to do online disk backups OR make sure you make your
customers aware of recovery time constraints. Investigate NetApp filers or
online SAN storage and use it.
9. Customer acceptance: if end users can't or won't use it, your warehouse will
be a failure. This can result from perception problems, bad training, bad
performance or bad management.
---
Q: Why are nulls bad in Data Warehouse fact tables?
A: Two primary reasons:
- summation/average issues: avg(5, null, 6, 7) returns 6 because Oracle's avg()
ignores nulls (tested: null values did not affect the average); if the null was
meant to represent 0, the correct answer would be 4.5, so nulls can silently
skew aggregates.
- joining issues: joining FKs on null can be problematic.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Exadata/Exadata specific
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
*** update this section from the wiki page
https://wiki.collegeboardnewmedia.org/pages/viewpage.action?pageId=35888289
Q: What is Exadata?
A: Exadata is an intelligently engineered, pre-configured database solution
from Oracle that provides "extreme performance" out of the box. Oracle provides
an all-inclusive vertical stack of technologies (OS, network, hardware and
database) under one vendor flag, engineered to work together and supported
under Oracle's Platinum support model (which includes remote monitoring and
Oracle-initiated patching).
---
Q: What are some of the features that distinguish Exadata and make it perform
so well?
A:
- 10x improvements (or better) for mixed workload (OLTP and Engineering)
database use (see Exadata Performance Results for CB's documented quantitative
performance improvement figures obtained during Exadata performance testing).
- Fully functional Oracle database (11g or 12c) with industry-leading features
addressing High Availability, Clustering, ASM, Exadata-specific plug-ins to
Grid Control management, and (if using 12c features) In-Memory capabilities and
multi-tenant database resource management.
- Onboard dedicated storage engines that improve upon network-attached storage
engines by 10-fold, and which enable reporting from transactional tables
without the need for additional ETL.
- Hybrid Columnar Compression (HCC), Oracle's advanced database compression
technology, which reduces disk storage while improving DML and read
performance. The storage engines work with the data in compressed format,
eliminating the need to uncompress data to query/manipulate it.
- Heavy usage of onboard flash cache to enable performance against high-latency
objects.
- Onboard dedicated 40Gb/second InfiniBand direct network connectivity (i.e.,
not NFS) between the database engine and the storage engines, eliminating I/O
waits and delays.
- Large amounts of conventional RAM, large numbers of CPUs and huge onboard
data storage capacity to enable the consolidation of multiple existing servers
onto one Exadata machine (36 cores and 256GB of RAM standard per DB node
server).
- High availability and redundancy of all components within the solution at the
hardware level, preventing any single point of hardware failure within the
solution.
- RAC and double or triple mirroring at the ASM/storage layer, preventing any
database-level single point of failure and enabling zero downtime for patching
and maintenance via rolling patching events.
- Utilization of RAC coupled with the InfiniBand network architecture enables
mass scalability of hardware to include multiple racks of DB node servers all
appearing as one database solution and sharing access to multiple storage
servers.
- Platinum support from Oracle included with any Engineered solution purchase,
which includes remote monitoring, automatic escalation of Sev #1s with 5, 15
and 30 minute SLAs, and free remote quarterly patching and upgrading of the
database.
---
Q: How do you tell what "version" of Exadata you are on?
A: Exadata as a solution includes multiple components that have their own
versions, but the Exadata solution as a whole also has a "version" that is
tracked by Oracle. Here's how to get the versions of the various main
components:
- Exadata "version": run imageinfo (as root) on the DB compute nodes and see
"Image version"
- O/S: uname -a (also in imageinfo)
- Oracle database version: v$version for the main version, and opatch
lsinventory for patches on top of base
- Cell servers: imageinfo
- Switches and firmware versions: log into the ILOM of the various components
and hit system information
See this link for more:
http://www.dbas-oracle.com/2013/04/How-to-check-Exadata-Image-Version-Grid-Software-version-and-Database-Software-Version-in-Oracle.html
---
Q: What are some key Exadata management best practices?
A:
- see Oracle Exadata Best Practices (Doc ID 757552.1)
- Run exachk regularly, establish a "health score" baseline and try to keep
your system at that baseline
- Read Exadata Critical Issues (Doc ID 1270094.1) regularly.
---
Q: What are some critical Exadata Doc IDs and links?
A:
- Oracle Exadata Best Practices (Doc ID 757552.1)
- Exadata Critical Issues (Doc ID 1270094.1)
- Exadata Database Machine and Exadata Storage Server Supported Versions
(Doc ID 888828.1)
(more: see support.oracle.com bookmarks)
---
Q: What is OEDA?
A: Oracle Exadata Deployment Assistant, a Java-based application that lets you
completely configure the Exadata prior to Oracle installation services (ACS)
coming onsite. Generates XML that is transportable, editable and can be read
back into the tool. Installs on your desktop. Downloadable from:
http://www.oracle.com/technetwork/database/exadata/oeda-download-2076737.html
---
Q: What are some key "gotchas" that can come back to haunt you while doing
OEDA?
A: Based on our CB installation, here were the irreversible issues we ran into
while doing OEDA:
- UID/GID of the oracle user and dba groups
- $ORACLE_HOME location
- admin versus client hostnames and connectivity to the Cloud Control/OEM server
- 80/20 data/reco ASM configuration
- Normal versus high redundancy in ASM
- IPs, hostnames, initial DB names
- lack of a backup network
---
Q: What is exachk and how do you run it?
A: exachk is the former "HealthCheck" for the Exadata machines; it performs all
sorts of system checks, looking at configuration parameters and listing when
things are in error.
See Oracle Exadata Database Machine exachk or HealthCheck (Doc ID 1070954.1);
this doc shows how to install it and has a link to the latest version.
To run: once unzipped on the DB node, simply ./exachk
---
Q: What is sundiag.sh and how do you run it?
A: A diagnostic tool you can run on the cell servers to send diagnostic info to
Oracle. Steps:
- Run as root on the compute node or cell server:
  # /opt/oracle.SupportTools/sundiag.sh
- Execution will create a date-stamped tar.bz2 file in /tmp/sundiag_/tar.bz2
- Upload this file to the Service Request.
---
Q: What is a Bloom filter and why is it important in Exadata?
A: It is a "probabilistic" data searching structure that depends on probability
estimates to find matching results in large data sets. It is designed not to be
hyper-accurate but instead to quickly eliminate non-candidate rows. It returns
either "possibly in set" or "definitely not in set", whereas exact data
matching filters will return either "definitely in set" or "definitely not in
set". The probabilistic/guessing nature is what makes the Bloom filter fast.
see http://www.slideshare.net/quipo/modern-algorithms-and-data-structures-1-bloom-filters-merkle-trees
https://en.wikipedia.org/wiki/Bloom_filter
---
Q: What are some good Exadata-specific interview questions?
A: " You're in charge of running OEDA: what are some key pieces of information you want to know before filling it out and handing it off to ACS? " Tell us about common Exadata-specific wait events and what they mean when you see them? " Describe Cell Off-loading/smart scanning; how do you tell if it is actually occurring? " Do you log direction into the cell servers to diagnose performance issues? What tools do you use? " Is it always better to have a query off-load to the storage servers? " Have you used HCC? What are some pros and cons to using it? " Bloom Filtering; can you explain the concept and why its good? " IORM/DBRM: do you recommend using them, why or why not? " Why does Exadata performance improve over time? " How would you diagnose an Exadata-borne SQL statement that suddenly runs 10x longer than expected? " What is your opinion on running gather_system_stats with EXADATA as an option? --- Q: How do you confirm that cell offloading actually occurred? A: Background: in Exadata, when you look at query plans you often see "Storage" off-loading steps but that just means that the step is "eligible" for off-loading. | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------------------- ... | 20 | TABLE ACCESS STORAGE FULL | PERSON_INFORMATION | 23M| 4597M| | 123K (1)| 00:28:51 | To confirm that off-loading actually occurred on a case by case basis, query v$sql. select sql_text, io_cell_offload_eligible_bytes, io_interconnect_bytes, io_cell_uncompressed_bytes, io_cell_offload_returned_bytes from v$sql where sql_id = 'admdhphbh1a1c'; --- Q: How do I pin a table to flash cache? A: alter table owner.tablename storage (cell_flash_cache keep); ALTER TABLE rdw.FACT_ASMT_ROSTER_SUMMARY_MV1 STORAGE (CELL_FLASH_CACHE KEEP); to unpin: alter table owner.tablename storage (cell_flash_cache default); --- Q: What are some Exadata Specific Hints? Exadata hints? 
A: /*+ opt_param('cell_offload_processing', 'false') */
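This hint disables cell offload processing for a single statement, which is
handy for A/B testing whether smart scan is actually helping a query. A minimal
sketch of that comparison (the table and filter column here are hypothetical
placeholders, not from a real system):

```sql
-- Run 1: offload disabled for this statement only via opt_param;
-- the session- and instance-level cell_offload_processing settings
-- are left untouched.
select /*+ opt_param('cell_offload_processing', 'false') */
       count(*)
from   rdw.fact_asmt_roster_summary_mv1    -- hypothetical fact table
where  load_date >= date '2015-01-01';     -- hypothetical filter column

-- Run 2: same query without the hint, so offload is allowed.
select count(*)
from   rdw.fact_asmt_roster_summary_mv1
where  load_date >= date '2015-01-01';
```

Comparing the elapsed times and the io_cell_offload_* columns in v$sql for the
two cursors shows how much the storage servers are actually contributing.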