Tuesday, July 10, 2012


As an application DBA, I was asked if I would be impacted if the system DBA revoked EXP_FULL_DATABASE from the application-DBA role.  I am frequently asked to perform export-import operations on a number of different schemas, and I prefer to do that work as myself rather than the schema owners, so I responded "Yes, there will be significant impact. What workaround do you propose?" Cue soundbite of crickets chirping.


The System DBA's concern is that EXP_FULL_DATABASE is a highly privileged role whose implicit rights must be protected: SELECT ANY TABLE and EXECUTE ANY PROCEDURE, in particular.  Their concern is well-founded.  With EXECUTE ANY PROCEDURE you can run absolutely anything in the database, and there are procedures owned by SYS that a malicious user can misuse to take control of the database.

This presents a dilemma.  I need to be able to export any schema and import into another database on demand to support development and testing efforts.  Without EXP_FULL_DATABASE, what are my options?

* Use legacy exp/imp as the schema owner
* Use Data Pump expdp/impdp as the schema owner
* Use Data Pump over a NETWORK_LINK, again only as the schema owner

Are you sensing the same theme I am?  Let me add another twist.  I do not necessarily, as an application DBA, have access to the database server itself.  Unless I negotiate access to a dump directory on the server I cannot get to any exports created by data pump export. 

I personally like to keep data-owning schema passwords local to the database, and different in each environment.  This protects the production data (access to development does not imply access to production), protects the schema structure (I must have the connection and password right for a given environment as an extra precaution in running update scripts), and empowers developers (they can have access to change structure as they need in development without affecting test or production).

And finally, another security wrinkle... Development and Test practices currently include making copies of production data available in test databases to test solutions to data-related issues outside of production.  This is a good testing practice, but a common and poor data security practice.  That issue is a topic for another post.  For the sake of this discussion we will continue under the premise that copying production to test is a firm requirement.

So the complete challenge is: how can I effectively support my development and test community without going through the System DBA to perform exports and transfer export files, and without further compromising security?

Let's explore each option. The solutions in this post assume that I know the source and target database passwords for each schema I need to manage, including production.  Each option will note the implications if the source password is not known.


Using the legacy export and import utilities as the schema owner is the simplest option.  It has the inherent risk that I must know the schema password for the source database.  If the source is production, that may present an information security risk. If I cannot know the production password then a system DBA must perform the export for me.

Export and Import served us well for many versions, but today they show their age.  Parallelism is limited (you do it yourself) and restarting a failed run means modifying parameter files to force it to start and stop where you need.  They also lack Data Pump's monitoring and exclusion features, so legacy exp/imp is not my preference if I can avoid it.

If I know the schema passwords for the source and target databases, this can work.  It is certainly not ideal, but it does allow me to maintain different passwords in each environment, since exp and imp are separate steps.
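As a concrete sketch of this two-step flow, here is roughly what the legacy utilities look like when run as the schema owner. The schema name SCOTT and the TNS aliases PRODDB and TESTDB are placeholders, and the passwords differ per environment, which is exactly why exp and imp work as two independent steps:

```shell
# Hypothetical sketch: legacy export/import run as the schema owner.
# SCOTT, PRODDB, and TESTDB are placeholder names.

# Step 1: export from the source (requires the SOURCE schema password)
exp scott/source_password@PRODDB \
    file=scott.dmp log=scott_exp.log \
    owner=scott consistent=y

# Step 2: import into the target (requires the TARGET schema password,
# which can be different from the source's)
imp scott/target_password@TESTDB \
    file=scott.dmp log=scott_imp.log \
    fromuser=scott touser=scott ignore=y
```

A nice side effect of the client-side dump file is that it lands wherever I run exp, so no access to the database server's filesystem is needed.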


If I have access to the data pump directory on the database server, this may be an option. Its benefits over legacy exp/imp are the enhanced restart, monitoring, parallelism, and exclusion features of Data Pump, which are well documented elsewhere.  It suffers from the same limitation as exp/imp in that I must know the source database password for the schema I want to export.  If I do not have access to the data pump dump directory on the database server, I again need the system DBA to be involved in routine data copy activities, which may not be the best use of their time.

If I have access to the data pump exports and the schema passwords in the source and target databases, this can also work.  It does not require the schema password to be the same in the source and target databases, since expdp and impdp are separate steps.
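A rough sketch of the Data Pump equivalent, again as the schema owner. This assumes a directory object named DATA_PUMP_DIR exists on each server and the schema owner can read and write it; SCOTT, PRODDB, and TESTDB are placeholders as before:

```shell
# Hypothetical sketch: Data Pump schema export/import as the schema owner.
# Unlike legacy exp, the dump file is written server-side, into the
# directory object -- which is why server filesystem access matters here.

# Export on the source
expdp scott/source_password@PRODDB \
    directory=DATA_PUMP_DIR dumpfile=scott_%U.dmp logfile=scott_exp.log \
    schemas=scott parallel=2

# Import on the target, after transferring the dump files to its
# DATA_PUMP_DIR on the target server
impdp scott/target_password@TESTDB \
    directory=DATA_PUMP_DIR dumpfile=scott_%U.dmp logfile=scott_imp.log \
    schemas=scott table_exists_action=replace
```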


Data Pump has the option to skip the intermediate export file altogether.  While this may be slower and require more network round-trips than writing to disk, it does not need access to the data pump dump directory on the database server.  In my case, this is an advantage.  With EXP_FULL_DATABASE on the source I can have a generic database link defined from target to source, and connect as myself (my ID is LDAP-authenticated, with the same password for any database). Without EXP_FULL_DATABASE on the source, the NETWORK_LINK option is not available to copy a different user's schema.
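In its simplest form, a network-mode import looks something like the sketch below. It assumes a database link SRC_LINK already exists in the target database pointing at the source; SCOTT and TESTDB remain placeholders. Note there is no dumpfile parameter at all, since rows flow straight across the link:

```shell
# Hypothetical sketch: network-mode import, no intermediate dump file.
# Without EXP_FULL_DATABASE on the source side of the link, only the
# link owner's own schema can be pulled across.
impdp scott/target_password@TESTDB \
    network_link=SRC_LINK \
    schemas=scott logfile=scott_netimp.log
```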

Generic database links require the user to have the same password in both databases.  I can define a user-owned database link, but that leaves the password in plain text in the USER_DB_LINKS view in the target.  Even if I only create the link while I am moving data, if this is a development database anyone with the schema password can see the database link definition while I am performing the copy.

One solution is to change the target database password while I am performing a copy.  This locks out anyone else who does know the password, preventing access while the copy is occurring.  This also means I can create the database link to the source, perform the copy, then drop the database link before resetting the schema password to its usual value.  This prevents others from learning the source database password by querying USER_DB_LINKS in the target database. 
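The rotate-copy-restore sequence above can be sketched roughly as follows. Every name and password here is a placeholder, and the ALTER USER steps assume a DBA account (or ALTER USER privilege) in the target:

```shell
# Hypothetical sketch of the rotate/copy/restore sequence.

# 1. Temporarily change the target schema password so nobody else can
#    log in and see the link definition while the copy runs.
sqlplus -s system/dba_password@TESTDB <<'SQL'
ALTER USER scott IDENTIFIED BY temp_copy_password;
SQL

# 2. As the schema owner, create a private link to the source...
sqlplus -s scott/temp_copy_password@TESTDB <<'SQL'
CREATE DATABASE LINK src_link
  CONNECT TO scott IDENTIFIED BY source_password
  USING 'PRODDB';
SQL

# ...run the network-mode import...
impdp scott/temp_copy_password@TESTDB \
    network_link=SRC_LINK schemas=scott table_exists_action=replace

# ...then drop the link so the source password leaves the database.
sqlplus -s scott/temp_copy_password@TESTDB <<'SQL'
DROP DATABASE LINK src_link;
SQL

# 3. Restore the schema's usual password.
sqlplus -s system/dba_password@TESTDB <<'SQL'
ALTER USER scott IDENTIFIED BY usual_password;
SQL
```

The window where the source password sits in USER_DB_LINKS is exactly the window where nobody else can log in as the schema, which is the point of the rotation.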


If the source schema password is unknown then we must come up with some other way to run an export for a different schema.  This requires some analysis of the PL/SQL behind expdp and impdp to determine the equivalent privileges needed without SELECT ANY TABLE and EXECUTE ANY PROCEDURE.

The next post in this series will include the results of that investigation and any additional strategies that come from it.


Tuesday, April 24, 2012

Considering the Cloud? You Might Already Be There.

I'm at IOUG Collaborate 12 this week, and one of the sessions I attended gave me a new perspective on "The Cloud", or at least what it means to have and use a "Private Cloud". I've always had a bit of an eye-roll response to the concept of a Private Cloud - why put a trendy name on something as commonplace and mundane as putting more than one thing on a server/SAN/mainframe? That was my view of a private cloud.

The session I attended was by Michael Timpanaro-Perrotta, of Oracle Corporation. The presentation was one of the more enlightening overviews of cloud architecture that I have come across. I won't re-present Michael's work here, but will share a few of my "Aha!" moments.

Aha Moment #1

There is nothing magical about "the cloud". A cloud presents technology as a standardized service. It is less about the technology itself or where it is hosted and more about how the service is deployed and delivered. If we are deploying servers in a coherent fashion where the upkeep becomes transparent to our customers, that is one type of cloud.

Aha Moment #2

In the database sense, a cloud database can be a centrally-managed, consolidated instance that serves a variety of customers and applications. It turns out much of my career has been "cloudy", in this sense. Whenever I have the opportunity, I am a proponent of consolidating instances for ease of manageability and reduced maintenance cost. These are key cloud concepts - common, standardized deployments that serve multiple purposes.

Aha Moment #3

I'm building a private cloud database today. Who knew? Our project is labelled a "server refresh" - nothing about "cloud" in the name, design, or requirements - but we are consolidating similar instances with compatible workload into a single instance. We are migrating from a dedicated host to a shared O/S infrastructure. Eventually this database will move from the shared O/S cluster into an even-more-shared RAC cluster. This is the very process Michael laid out in discussing how organizations move from silos into a private cloud model.

"Cloud" may be the buzzword of the year, but it is not as mysterious as you may think. If you look at your current technology projects, those that simplify things, standardize, virtualize, or share architecture at some level may very well be cloud projects.