22 April, 2009

Bringing ONLINE a Datafile that is in RECOVER mode because it was OFFLINE

A recent forums thread was about a Datafile that had been taken OFFLINE by the DBA and was in RECOVER mode.
When a DBA sees a datafile in 'RECOVER' mode (in DBA_DATA_FILES) he shouldn't jump to the conclusion that he needs to RESTORE *and* RECOVER the datafile -- particularly if the file does still exist !
A RESTORE is required only if the file doesn't exist on disk (accessible to the database) OR is corrupted.

In "normal" circumstances (i.e. when a RESTORE has been issued), a RECOVER needs to roll-forward through all the ArchiveLogs that have been generated since the point in time of the datafile -- i.e. the backup of that datafile.

However, if the Datafile was taken OFFLINE, although Oracle marks it as requiring RECOVERy (as is seen in DBA_DATA_FILES.ONLINE_STATUS), it doesn't really need all the ArchiveLogs. It only needs those ArchiveLogs that captured the last Checkpoint of the Datafile and its being taken OFFLINE. Subsequent ArchiveLogs (no matter how many there are) are not required. Thus, if the file was taken OFFLINE four days ago, as the DBA, I need only the ArchiveLogs that captured the last set of transactions not checkpointed into the Datafile and the ArchiveLog that captured the issuance of the ALTER DATABASE DATAFILE filename OFFLINE command. I do NOT need 4 days of ArchiveLogs.
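As an illustration of how to see where the required redo begins, the datafile's recovery start SCN (reported in V$RECOVER_FILE) can be matched against the SCN ranges of the archived logs. A minimal sketch, using file number 6 merely because that is the file number in the demonstration below :

SQL> select file#, online_status, change#, time from v$recover_file;

SQL> select a.sequence#, a.first_change#, a.next_change#
     from v$archived_log a, v$recover_file r
     where r.file# = 6
     and r.change# between a.first_change# and a.next_change# - 1;

The sequence returned by the second query is where the RECOVER will start; the ArchiveLog containing the SCN of the DATAFILE OFFLINE is where it can stop.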

Here, I first present one case where I no longer have the ArchiveLogs that captured the last Checkpoint of the Datafile and the DATAFILE OFFLINE command. As is evident, datafile /usr/tmp/test_offline cannot be RECOVERed and brought ONLINE :


SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /oracle_fs/ArchiveLogs/ORT24FS
Oldest online log sequence 1
Next log sequence to archive 2
Current log sequence 2
SQL> create tablespace test_offline datafile '/usr/tmp/test_offline' size 10M ;

Tablespace created.

SQL> alter system switch logfile;

System altered.

SQL> alter database datafile '/usr/tmp/test_offline' offline;

Database altered.

SQL> select * from dba_data_files where file_name like '/usr/tmp/test%';

FILE_NAME
--------------------------------------------------------------------------------
FILE_ID TABLESPACE_NAME BYTES BLOCKS STATUS
---------- ------------------------------ ---------- ---------- ---------
RELATIVE_FNO AUT MAXBYTES MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS
------------ --- ---------- ---------- ------------ ---------- -----------
ONLINE_
-------
/usr/tmp/test_offline
6 TEST_OFFLINE AVAILABLE
6
RECOVER


SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

SQL> !rm /oracle_fs/ArchiveLogs/ORT24FS/*

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /oracle_fs/ArchiveLogs/ORT24FS
Oldest online log sequence 3
Next log sequence to archive 5
Current log sequence 5
SQL> !ls -l /oracle_fs/ArchiveLogs/ORT24FS
total 0

SQL> alter system switch logfile;

System altered.

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /oracle_fs/ArchiveLogs/ORT24FS
Oldest online log sequence 4
Next log sequence to archive 6
Current log sequence 6
SQL> !ls -l /oracle_fs/ArchiveLogs/ORT24FS
total 8
-rw-r----- 1 ora10204 dba 4608 Apr 22 22:33 1_5_684196024.dbf

SQL> alter database datafile '/usr/tmp/test_offline' online;
alter database datafile '/usr/tmp/test_offline' online
*
ERROR at line 1:
ORA-01113: file 6 needs media recovery
ORA-01110: data file 6: '/usr/tmp/test_offline'


SQL> recover datafile 6;
ORA-00279: change 649615 generated at 04/22/2009 22:29:06 needed for thread 1
ORA-00289: suggestion : /oracle_fs/ArchiveLogs/ORT24FS/1_2_684196024.dbf
ORA-00280: change 649615 for thread 1 is in sequence #2


Specify log: {=suggested | filename | AUTO | CANCEL}
cancel
Media recovery cancelled.
SQL> alter database datafile '/usr/tmp/test_offline' online;
alter database datafile '/usr/tmp/test_offline' online
*
ERROR at line 1:
ORA-01113: file 6 needs media recovery
ORA-01110: data file 6: '/usr/tmp/test_offline'


SQL> drop tablespace test_offline including contents and datafiles;

Tablespace dropped.

SQL>

I needed ArchiveLog 2 to be able to issue the RECOVER command. However, as I had (either inadvertently or because it was a very old file) removed that ArchiveLog from disk, I cannot RECOVER the datafile. Note, however, that if I did have a Tape backup of ArchiveLogs 2 and 3 I would have been able to RECOVER that datafile and then bring it ONLINE (without requiring Sequences 4 and 5).
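Had Sequences 2 and 3 still been available in a tape backup, an RMAN restore along these lines would have brought them back to disk first -- a sketch only, as the channel allocation depends on your media manager configuration :

RMAN> run {
        allocate channel t1 device type sbt;
        restore archivelog sequence between 2 and 3;
      }

The RECOVER DATAFILE and the ALTER DATABASE DATAFILE ... ONLINE would then have succeeded.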


In this next scenario, I delete only those ArchiveLogs generated after the one that captured the ALTER DATABASE DATAFILE filename OFFLINE command. (We can assume that I either preserved the required ArchiveLogs or restored them from backup.)


SQL> create tablespace t_o_2 datafile '/usr/tmp/t_o_2.dbf' size 10M;

Tablespace created.

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /oracle_fs/ArchiveLogs/ORT24FS
Oldest online log sequence 4
Next log sequence to archive 6
Current log sequence 6
SQL> alter system switch logfile;

System altered.

SQL> alter database datafile '/usr/tmp/t_o_2.dbf' offline;

Database altered.

SQL> alter system switch logfile;

System altered.

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /oracle_fs/ArchiveLogs/ORT24FS
Oldest online log sequence 6
Next log sequence to archive 8
Current log sequence 8
SQL> !ls -l /oracle_fs/ArchiveLogs/ORT24FS
total 44
-rw-r----- 1 ora10204 dba 4608 Apr 22 22:33 1_5_684196024.dbf
-rw-r----- 1 ora10204 dba 28160 Apr 22 22:36 1_6_684196024.dbf
-rw-r----- 1 ora10204 dba 6656 Apr 22 22:36 1_7_684196024.dbf

SQL> create table t_2 (col_1 number);

Table created.

SQL> insert into t_2 select object_id from dba_objects;

50601 rows created.

SQL> commit;

Commit complete.

SQL> alter system switch logfile;

System altered.

SQL> /

System altered.

SQL> !ls -l /oracle_fs/ArchiveLogs/ORT24FS
total 900
-rw-r----- 1 ora10204 dba 4608 Apr 22 22:33 1_5_684196024.dbf
-rw-r----- 1 ora10204 dba 28160 Apr 22 22:36 1_6_684196024.dbf
-rw-r----- 1 ora10204 dba 6656 Apr 22 22:36 1_7_684196024.dbf
-rw-r----- 1 ora10204 dba 866304 Apr 22 22:38 1_8_684196024.dbf
-rw-r----- 1 ora10204 dba 1536 Apr 22 22:38 1_9_684196024.dbf

SQL> !rm /oracle_fs/ArchiveLogs/ORT24FS/1_8_*.dbf

SQL> !rm /oracle_fs/ArchiveLogs/ORT24FS/1_9_*.dbf

SQL> alter database datafile '/usr/tmp/t_o_2.dbf' online;
alter database datafile '/usr/tmp/t_o_2.dbf' online
*
ERROR at line 1:
ORA-01113: file 6 needs media recovery
ORA-01110: data file 6: '/usr/tmp/t_o_2.dbf'


SQL> recover datafile 6;
ORA-00279: change 649846 generated at 04/22/2009 22:36:10 needed for thread 1
ORA-00289: suggestion : /oracle_fs/ArchiveLogs/ORT24FS/1_6_684196024.dbf
ORA-00280: change 649846 for thread 1 is in sequence #6


Specify log: {=suggested | filename | AUTO | CANCEL}

ORA-00279: change 649874 generated at 04/22/2009 22:36:28 needed for thread 1
ORA-00289: suggestion : /oracle_fs/ArchiveLogs/ORT24FS/1_7_684196024.dbf
ORA-00280: change 649874 for thread 1 is in sequence #7
ORA-00278: log file '/oracle_fs/ArchiveLogs/ORT24FS/1_6_684196024.dbf' no
longer needed for this recovery


Specify log: {=suggested | filename | AUTO | CANCEL}

Log applied.
Media recovery complete.
SQL> select * from dba_data_files where file_name like '/usr/tmp/t_o%';

FILE_NAME
--------------------------------------------------------------------------------
FILE_ID TABLESPACE_NAME BYTES BLOCKS STATUS
---------- ------------------------------ ---------- ---------- ---------
RELATIVE_FNO AUT MAXBYTES MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS
------------ --- ---------- ---------- ------------ ---------- -----------
ONLINE_
-------
/usr/tmp/t_o_2.dbf
6 T_O_2 AVAILABLE
6
OFFLINE


SQL> alter database datafile '/usr/tmp/t_o_2.dbf' online;

Database altered.

SQL>


Thus, the RECOVER command for /usr/tmp/t_o_2.dbf required only ArchiveLogs 6 and 7. I did NOT need ArchiveLogs 8, 9 and 10 even though they were generated after the particular Datafile was taken OFFLINE.
Therefore, although ArchiveLogs 8, 9 and 10 do capture transactions in *other* Datafiles (and, therefore, would be required if I were to RESTORE and/or RECOVER those other Datafiles), I do not need them for this particular Datafile that was "properly and normally" taken OFFLINE.
As further evidence, see these messages from the alert.log file :

Wed Apr 22 22:36:42 2009
alter database datafile '/usr/tmp/t_o_2.dbf' offline
Wed Apr 22 22:36:42 2009
Completed: alter database datafile '/usr/tmp/t_o_2.dbf' offline
Wed Apr 22 22:36:47 2009
Thread 1 cannot allocate new log, sequence 8
Checkpoint not complete
Current log# 1 seq# 7 mem# 0: /oracle_fs/Databases/ORT24FS/redo01.dbf
Wed Apr 22 22:36:49 2009
Thread 1 advanced to log sequence 8 (LGWR switch)
Current log# 2 seq# 8 mem# 0: /oracle_fs/Databases/ORT24FS/redo02.dbf
Wed Apr 22 22:38:15 2009
Thread 1 cannot allocate new log, sequence 9
Checkpoint not complete
Current log# 2 seq# 8 mem# 0: /oracle_fs/Databases/ORT24FS/redo02.dbf
Wed Apr 22 22:38:16 2009
Thread 1 advanced to log sequence 9 (LGWR switch)
Current log# 3 seq# 9 mem# 0: /oracle_fs/Databases/ORT24FS/redo03.dbf
Thread 1 cannot allocate new log, sequence 10
Checkpoint not complete
Current log# 3 seq# 9 mem# 0: /oracle_fs/Databases/ORT24FS/redo03.dbf
Wed Apr 22 22:38:19 2009
Thread 1 advanced to log sequence 10 (LGWR switch)
Current log# 1 seq# 10 mem# 0: /oracle_fs/Databases/ORT24FS/redo01.dbf
Wed Apr 22 22:39:19 2009
alter database datafile '/usr/tmp/t_o_2.dbf' online
Wed Apr 22 22:39:19 2009
ORA-1113 signalled during: alter database datafile '/usr/tmp/t_o_2.dbf' online...
Wed Apr 22 22:39:25 2009
ALTER DATABASE RECOVER datafile 6
Media Recovery Start
parallel recovery started with 2 processes
ORA-279 signalled during: ALTER DATABASE RECOVER datafile 6 ...
Wed Apr 22 22:39:28 2009
ALTER DATABASE RECOVER CONTINUE DEFAULT
Wed Apr 22 22:39:28 2009
Media Recovery Log /oracle_fs/ArchiveLogs/ORT24FS/1_6_684196024.dbf
ORA-279 signalled during: ALTER DATABASE RECOVER CONTINUE DEFAULT ...
Wed Apr 22 22:39:31 2009
ALTER DATABASE RECOVER CONTINUE DEFAULT
Wed Apr 22 22:39:31 2009
Media Recovery Log /oracle_fs/ArchiveLogs/ORT24FS/1_7_684196024.dbf
Wed Apr 22 22:39:31 2009
Media Recovery Complete (ORT24FS)
Completed: ALTER DATABASE RECOVER CONTINUE DEFAULT
Wed Apr 22 22:40:15 2009
alter database datafile '/usr/tmp/t_o_2.dbf' online
Wed Apr 22 22:40:15 2009
Starting control autobackup
Control autobackup written to DISK device
handle '/oracle_fs/FRAs/ORT24FS/ORT24FS/autobackup/c-4163910544-20090422-03'
Completed: alter database datafile '/usr/tmp/t_o_2.dbf' online



Thus, Oracle needed only ArchiveLogs 6 and 7, even though I had deleted 8 and 9 from disk (and had no backups of them).
.
.
.

45 comments:

coskan said...

Very nice and brief explanation covered with clear demonstrations.

Thank you

Anonymous said...

The way you post your blog as a link makes reading it a waste of time... so I seek the solution elsewhere.

Hemant K Chitale said...

"...so I seek solution elsewhere" might mean that you don't bother to spend enough time to read the post (hint : the solution was in this post). I certainly won't reproduce the whole of the forums thread here as well.
If you are in such a tearing hurry, you should seek a different place...

tjay said...

Hi
Let's say I started a hot backup now and it completed.

Now, I restored this backup; since it is inconsistent (the last change# of the datafiles is null) the database will require recovery.

Let's say I have 10 archivelogs and redologs.

Does applying only one archivelog and then cancelling make the whole database consistent?

I mean, after an inconsistent backup, if I apply only one archivelog rather than all the archivelogs, does the whole db become consistent?

Is there a general rule for this?

Hemant K Chitale said...

It is not how many archivelogs and redo logs you "have" that matters. What is required is the redo generated between the BEGIN BACKUP and the END BACKUP. If the BEGIN BACKUP was at 12:01pm and the END BACKUP was at 12:42pm and ArchiveLogs 143 to 149 were generated between that time, you need ArchiveLogs 143 to 150. You have to include the first ArchiveLog generated after the END BACKUP because the redo at that time was in the Online Redo Log -- which should have been archived immediately thereafter. You cannot rely on the Online Redo Log because it will be overwritten by LGWR operating in a round-robin fashion. The ArchiveLogs are not overwritten or deleted until you explicitly delete them (or, in an FRA, Oracle properly determines that they can be deleted).
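As an illustration of the usual practice (a sketch -- the tablespace name is only a placeholder), forcing the current log to be archived immediately after the END BACKUP ensures that the last piece of required redo is captured as an ArchiveLog :

SQL> alter tablespace users end backup;
SQL> alter system archive log current;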

Hemant

watchah said...

Hi, thanks for this post; it's very clear and well explained.

I have exactly this problem (Oracle 9i), so I hope I still have those archivelogs... and will look for them on tape if not.

But what should I do if it happens that :
1) I don't have the archivelogs anymore at all
2) I don't want to restore that datafile anyway and, if possible, want to delete it

If I can't do anything, is there a risk for the database in keeping that datafile in that state?

I know it's a lot of questions but well...thank you in advance if you can take some time to answer!

Hemant K Chitale said...

Watchah,

You could keep that file OFFLINE permanently, of course. However, you'd have to watch out for errors in your Backup scripts, as you would have to exclude the OFFLINE file from your backups !

If this was the only file in the Tablespace, you could DROP TABLESPACE tablespacename;

Hemant

watchah said...

Thank you for your answer!

I don't understand what exactly you mean by watching out for errors in my backup scripts? There is an incremental level 0, every month.
But it has already run since my problem occurred and the logs are clean.
Do you mean I could have a problem later if I need to restore the whole database?
I can't delete the offline datafile as I have another one in the tablespace.

I've looked for my archivelogs and I don't find any archivelogs in the "Archive destination" directory, not even the current one. It seems like it goes directly to tape; I don't know how, as I'm not the creator of this database, but I can see it in TINA :o
So when I try the recover I just hope it will find the files on tape as easily as it saved them!

tjay said...

Thanks Hemant;
Is there a way to tell whether the db is consistent or not?
Suppose I applied two archives; is there an SQL query I can run to check whether the db is consistent or whether I need more archives?

tjay said...

and
Is this scenario the same in RMAN?
Suppose rman backup started at
2:00 and finished at 3:00.
I need all archives between 2:00 and 3:00 to open the db, is that right?

Hemant K Chitale said...

watchah,

you must check and know your backup script. Oracle writing archivelogs directly to tape used to be implemented a very, very long time ago (in V6 and V7). I wonder if Oracle even supports archivelogs to tape directly now. It could be a VTL (Virtual Tape Library) that presents tape devices as disk devices. Bottom line : You must know your backup script very well before you can ascertain if you can do a restore. There should be some documentation about the backup script and some documentation about recovery tests ! Else, you must conduct some recovery tests (without overwriting your production database). If you do not have the documentation and/or skills, you might want to engage some consulting organisation for a short period to run the tests and write the documentation.

As for the backup handling offline files -- it goes back to how your backup is written ! I can't comment on the fact that no errors are being returned without knowing how the backups are done.

Hemant K Chitale

Hemant K Chitale said...

tjay,

Oracle does its own check for the datafiles being consistent when you attempt an ALTER DATABASE OPEN command. An OPEN is not allowed (with or without RESETLOGS) if the files are not consistent.

"consistency" generally means bringing them to the same SCN level, provided that they are not "fuzzy".

Scofield said...

Hi;
If I take the datafile offline, I will need recovery up to the stop SCN of the datafile.
The reason behind this is that a checkpoint is not issued.
I know the task of a checkpoint (write dirty blocks to disk, etc.) but what is the relation between
a checkpoint and recovery?
Since dirty blocks remain in the buffer cache, they can be written when I bring the datafile back up.
or
Suppose there are no dirty blocks for that datafile and I take it offline.
Again, why do I need recovery?

Hemant K Chitale said...

Test it yourself.
Think of a scenario where the instance was shut down or crashed after the datafile went offline. What would have happened ?

Hemant

Scofield said...

Since changes are recorded in the redologs, instance recovery will be performed and nothing would happen

Hemant K Chitale said...

"instance recovery would be performed" cannot co-exist as an assertion with "nothing would happen". The two parts of your statement are mutually exclusive.

To be correct : "datafile recovery would be executed".


Hemant

Ankush Juneja said...

Hi sir

hello

if I want to use the transportable tablespace feature in Oracle, do the databases have to be
identical?

Suppose I copy one datafile from one database to a different database and export and import the metadata.

Since the resetlogs change#, dbname, etc. will be different in the datafile, how can this work?

Hemant K Chitale said...

Ankush,

Cross-Platform Tablespace Transport is also supported.
Remember that the Tablespace must be set to READ ONLY when it is being copied out.
See the Database Administrator's Guide.

The headers of the tablespace datafiles will be updated with the DBID, DBNAME and SCNs of the new Database they "belong to" when they are imported into it.
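In outline, the transport looks something like this -- a sketch only, where the tablespace name, directory and paths are placeholders, and DBMS_TTS.TRANSPORT_SET_CHECK should be run first to verify that the tablespace set is self-contained :

SQL> alter tablespace my_ts read only;
$ expdp system/password directory=DATA_PUMP_DIR dumpfile=my_ts.dmp transport_tablespaces=MY_TS
(copy the dump file and the tablespace datafile(s) to the target server, then)
$ impdp system/password directory=DATA_PUMP_DIR dumpfile=my_ts.dmp transport_datafiles='/u01/oradata/my_ts_01.dbf'
SQL> alter tablespace my_ts read write;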


Hemant K Chitale

John Carew said...

1-)
I know that online redologs are not used in a physical standby database.

Suppose I created a physical standby database with standby redologs.
As far as I know, online redologs are not used in a standby database (standby redologs are used).
However, when I check the status of the online redologs in the standby database, I see:
"CLEARING_CURRENT" or "CLEARING".
Their status also changes once a log switch occurs in production.
What's the reason for this, and why does this status keep changing? If the standby's online redologs are not used, their status
should not change.

2-)
What would happen if I deleted the online redologs of the standby database from the operating system?
Would the status again show: clearing, clearing current?

Hemant K Chitale said...

John,
I can't currently answer these questions. Remember that I did not post about Standby databases and Standby Logs.

I suggest that you raise your query on forums.oracle.com or raise an SR with Oracle Support.

Hemant K Chitale

Anonymous said...

shutdown abort;
startup mount;

idle> recover database using backup controlfile until cancel;
ORA-00279: change 3678974 generated at 12/12/2009 13:10:32 needed for thread 1
ORA-00289: suggestion :
C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ARCHIVELOG\2009_12_12\O1_MF_1_1_%U_.ARC
ORA-00280: change 3678974 for thread 1 is in sequence #1


Specify log: {=suggested | filename | AUTO | CANCEL}
cancel
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: 'C:\ORACLEXE\ORADATA\XE\SYSTEM.DBF'


ORA-01112: media recovery not started


When I check the alert log I notice the entries below. Why do I see these fuzzy warnings during "recover database using backup controlfile"?
What's the reason for this?

ALTER DATABASE RECOVER database using backup controlfile until cancel
Sat Dec 12 13:12:45 2009
Media Recovery Start
WARNING! Recovering data file 1 from a fuzzy file. If not the current file
it might be an online backup taken without entering the begin backup command.
WARNING! Recovering data file 2 from a fuzzy file. If not the current file
it might be an online backup taken without entering the begin backup command.
WARNING! Recovering data file 3 from a fuzzy file. If not the current file
it might be an online backup taken without entering the begin backup command.
WARNING! Recovering data file 4 from a fuzzy file. If not the current file
it might be an online backup taken without entering the begin backup command.
ORA-279 signalled during: ALTER DATABASE RECOVER database using backup controlfile until cancel ...
Sat Dec 12 13:12:47 2009

Hemant K Chitale said...

Anonymous,

The datafiles are fuzzy because
a. they were not in backup mode
and
b. they were open and not closed properly (because of the shutdown abort).

Normally, you'd only need Instance Recovery, which is done automatically by STARTUP OPEN. However, since you did a STARTUP MOUNT and then issued a RECOVER DATABASE command, Oracle assumed that these files had been restored from a backup. It then found that the headers were "fuzzy" (because they really weren't from a backup !)

Nothing unusual for this specific case. (Had those files been restored from a backup and Oracle reported those messages, it would certainly have meant that the backup was improper.)

What you really needed was Instance Recovery done by a normal STARTUP, not Media Recovery.
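In other words, a sketch of what would have sufficed :

SQL> shutdown abort
SQL> startup
(Instance Recovery is performed automatically before the database is opened -- no RECOVER command is needed)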


Hemant K Chitale

Scofield said...

Hi;
If I take the datafile offline, I will need recovery up to the stop SCN of the datafile.

Suppose there are no dirty blocks for that datafile and I take it offline.
Why do I need recovery to bring the datafile online?

Hemant K Chitale said...

Scofield,
How would Oracle know that "there are no dirty blocks in the file" ? You know it -- tell me how Oracle would know it and therefore make a decision that the file needs no recovery.

Hemant K Chitale

Scofield said...

Once I bring up the datafile, it can check whether there are dirty blocks or not

Hemant K Chitale said...

Scofield,
Tell me how it will do so ?
Say I have 1 datafile of 32GB that I bring online. Or I have 12 datafiles of 128GB each.
Should I wait for it to check *each* block ?
I presume that you have thought this through and have come up with a plan for Oracle to check the datafile. Send that to Oracle RDBMS development please.

Hemant K Chitale

Scofield said...

Suppose I forced a checkpoint and offlined the datafile immediately afterwards?
Why do I again need recovery, when I have just written all dirty blocks by forcing a checkpoint?

Thanks for your help Hemant

Hemant K Chitale said...

What is "immediate" in a database ? A lot can happen in 30seconds. Many blocks can be modified within a single second.

Hemant K Chitale

Hemant K Chitale said...

A comment from "Anonymous" :
Can you tell me how to restore data from Oracle data files if the operating system has crashed and I can only copy files from the Oracle home directory, and if the drive letter is different on the new operating system.

(I have deleted his original comment as it contained his email address)

Hemant K Chitale said...

Anonymous,
If you are to RESTORE to a different location, you can use SET NEWNAME for each datafile in the RMAN command script.

See the example in http://hemantoracledba.blogspot.com/2012/01/datafiles-not-restored-using-vdatafile.html
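A minimal sketch of such a script (the file numbers and paths here are placeholders only) :

RMAN> run {
        set newname for datafile 1 to 'D:\NEWPATH\system01.dbf';
        set newname for datafile 2 to 'D:\NEWPATH\sysaux01.dbf';
        restore database;
        switch datafile all;
        recover database;
      }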

Hemant K Chitale

Satish Lokhande said...

Hemant Sir, you are great; whenever I have a problem you have the solution. Thanks a lot. From Satish Lokhande

Hemant K Chitale said...

Satish,

Thank you !

Hemant

Ajay said...

Hi hemant,
Suppose I have taken offline a datafile that belongs to the SCOTT user, and I want to store online transactions that are made by SCOTT. How can I store them?

Hemant K Chitale said...

Ajay,
A datafile doesn't "belong" to a user. It is part of a Tablespace (which can contain objects owned by zero to 'n' different "users").
If SCOTT has objects in that tablespace and datafile, you cannot update these objects -- e.g. you cannot execute INSERT/DELETE/UPDATE or even SELECT against tables which have extents in this datafile.

You should create a *new* Tablespace (with its own datafile), then create *new* tables (with different names from the existing ones) in the new Tablespace and then insert data into these new tables.
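For example (all the names here are placeholders) :

SQL> create tablespace new_data datafile '/u01/oradata/new_data_01.dbf' size 100M;
SQL> create table scott.emp_new (empno number, ename varchar2(30)) tablespace new_data;
SQL> insert into scott.emp_new values (7999, 'NEWHIRE');
SQL> commit;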

Hemant

vineeth said...

Hi Hemant,

I'd like to ask you about different SCNs for a group of data files. Actually, it was not a clean shutdown when the AIX server rebooted.

Now I am having a problem with media recovery. 87 data files need recovery and they have a two-year-old SCN (November 2010) compared to the other data files' and controlfile's SCN. Their status is all ONLINE in MOUNT mode.

During media recovery, it is asking for archive logs from 2010. We have the archive logs available on tape but there are around half a million of them; it would take a lot of storage and days to apply.

I think there is something wrong. With an abnormal shutdown, why would we have to restore logs from 2010? How do I check whether these data files were read only before the instance crash?

Vineeth

Hemant K Chitale said...

Vineeth,
It seems that these datafiles were offline before the reboot.
OR were the tablespaces taken OFFLINE two years ago ?

Hemant

vineeth said...

Hello Hemant,

database was up & running and all data files were online before crash.

I am not sure these tablespaces were taken offline two years ago..if yes then it will be the reason.

while applying archivelogs..its so fast , oracle just saying its no longer needed for recovery..

Vineeth

orcl11gdba said...

Somewhat complex, but a good explanation.

Anonymous said...

Hemant,

Thanks for the post; it is old but still valid. I have one question.

What if you are creating a physical standby database from a database that has one datafile in RECOVER state? The DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE command attempts to recover that unrecoverable datafile and hence fails.

How can I handle this situation?

Hemant K Chitale said...

Anonymous,
I would RESTORE from a Backup and take the datafile OFFLINE. There would be no Recovery of this datafile. Then use a Standby controlfile created from the Primary to make this restored database a Standby database and proceed with Managed Recovery.

Hemant

Unknown said...

Hi, I have a totally different situation.

My database is a test db running in archivelog mode but with NO BACKUP.

I have a tablespace OGG_TS with 2 datafiles, but both datafiles have been deleted physically from +ASM.

Is there any way I can drop this tablespace and get rid of it?

Regards
Azmat

Hemant K Chitale said...

Azmat,
Did you try ALTER DATABASE DATAFILE n OFFLINE ?

You could then issue a DROP TABLESPACE tbsname INCLUDING CONTENTS AND DATAFILES
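That is, something along these lines -- the file numbers are hypothetical, so check DBA_DATA_FILES for the actual FILE_IDs of the two datafiles :

SQL> alter database datafile 7 offline;
SQL> alter database datafile 8 offline;
SQL> drop tablespace OGG_TS including contents and datafiles;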


Hemant

Anonymous said...

HI,

What I don't understand is why I can take a tablespace OFFLINE NORMAL (and the tablespace WON'T require media recovery), but a datafile taken OFFLINE DOES require media recovery.

Thanks

Hemant K Chitale said...

An ALTER DATABASE DATAFILE OFFLINE does NOT cause a Checkpoint. So Recovery is required for the datafile.
The default behaviour of ALTER TABLESPACE OFFLINE is OFFLINE NORMAL, which does cause a Checkpoint. That is why no subsequent Recovery is required.
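A quick way to see the difference for yourself -- a sketch, using a scratch tablespace and datafile name :

SQL> alter tablespace test_ts offline normal;
SQL> alter tablespace test_ts online;
(comes back ONLINE immediately -- no recovery is required)

SQL> alter database datafile '/usr/tmp/test_ts_01.dbf' offline;
SQL> alter database datafile '/usr/tmp/test_ts_01.dbf' online;
(fails with ORA-01113 until a RECOVER DATAFILE is run, exactly as demonstrated in the post above)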