Oracle Database 11g Release 2 (11.2.0.3) RAC On Oracle Linux 6.3 Using VirtualBox
This article describes the installation of Oracle Database 11g release 2 (11.2.0.3 64-bit) RAC on Linux (Oracle Linux 6.3 64-bit) using VirtualBox (4.2.6) with no additional shared disk devices.
Introduction
One of the biggest obstacles preventing people from setting up test RAC environments is the requirement for shared storage. In a production environment, shared storage is often provided by a SAN or high-end NAS device, but both of these options are very expensive when all you want to do is get some experience installing and using RAC. A cheaper alternative is to use a FireWire disk enclosure to allow two machines to access the same disk(s), but that still costs money and requires two servers. A third option is to use virtualization to fake the shared storage.
Using VirtualBox you can run multiple Virtual Machines (VMs) on a single server, allowing you to run both RAC nodes on a single machine. In addition, it allows you to set up shared virtual disks, overcoming the obstacle of expensive shared storage.
Before you launch into this installation, here are a few things to consider.
- The finished system includes the host operating system, two guest operating systems, two sets of Oracle Grid Infrastructure (Clusterware + ASM) and two Database instances all on a single server. As you can imagine, this requires a significant amount of disk space, CPU and memory.
- Following on from the last point, the VMs will each need at least 3G of RAM, preferably 4G if you don't want the VMs to swap like crazy. As you can see, 11gR2 RAC requires much more memory than 11gR1 RAC. Don't assume you will be able to run this on a small PC or laptop. You won't.
- This procedure provides a bare bones installation to get the RAC working. There is no redundancy in the Grid Infrastructure installation or the ASM installation. To add this, simply create double the amount of shared disks and select the 'Normal' redundancy option when it is offered. Of course, this will take more disk space.
- During the virtual disk creation, I always choose not to preallocate the disk space. This makes virtual disk access slower during the installation, but saves on wasted disk space. The shared disks must have their space preallocated.
- This is not, and should not be considered, a production-ready system. It's simply to allow you to get used to installing and using RAC.
- The Single Client Access Name (SCAN) should be defined in the DNS or GNS and round-robin between a minimum of 3 addresses, which are on the same subnet as the public and virtual IPs. Prior to 11.2.0.2 it could be defined as a single IP address in the '/etc/hosts' file. This is wrong and causes the cluster verification to fail, but it allowed you to complete the install without the presence of a DNS. This does not seem to work for 11.2.0.2 onward.
- The virtual machines can be limited to 2G of swap, which causes a prerequisite check failure, but doesn't prevent the installation working. If you want to avoid this failure, define 3G or more of swap.
- This article uses the 64-bit versions of Oracle Linux and Oracle 11g Release 2.
- When doing this installation on my server, I split the virtual disks on to different physical disks ('/u02', '/u03', '/u04'). This is not necessary, but makes things run a bit faster.
Download Software
Download the following software.
VirtualBox Installation
First, install the VirtualBox software. On RHEL and its clones you do this with the following type of command as the root user.
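As an illustration (the filename below is an assumption; substitute the package you actually downloaded for your distribution), the installation on OL6 looks something like this:

```shell
# Install the downloaded VirtualBox package as the root user.
# The filename is an example only; adjust it to your downloaded version.
rpm -Uvh VirtualBox-4.2-4.2.6_82870_el6-1.x86_64.rpm
```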
The package name will vary depending on the host distribution you are using. Once complete, VirtualBox is started from the 'Applications > System Tools > Oracle VM VirtualBox' menu option.
Virtual Machine Setup
Now we must define the two virtual RAC nodes. We can save time by defining one VM, then cloning it when it is installed.
Start VirtualBox and click the 'New' button on the toolbar. Enter the name 'ol6-112-rac1', OS 'Linux' and Version 'Oracle (64 bit)', then click the 'Next' button.
Enter '4096' as the base memory size, then click the 'Next' button.
Accept the default option to create a new virtual hard disk by clicking the 'Create' button.
Accept the default hard drive file type by clicking the 'Next' button.
Accept the 'Dynamically allocated' option by clicking the 'Next' button.
Accept the default location and set the size to '30G', then click the 'Create' button. If you can spread the virtual disks onto different physical disks, that will improve performance.
The 'ol6-112-rac1' VM will appear on the left hand pane. Scroll down the 'Details' tab on the right and click on the 'Network' link.
Make sure 'Adapter 1' is enabled, set to 'Bridged Adapter', then click on the 'Adapter 2' tab.
Make sure 'Adapter 2' is enabled, set to 'Bridged Adapter' or 'Internal Network', then click on the 'System' section.
Move 'Hard Disk' to the top of the boot order and uncheck the 'Floppy' option, then click the 'OK' button.
The virtual machine is now configured so we can start the guest operating system installation.
Guest Operating System Installation
With the new VM highlighted, click the 'Start' button on the toolbar. On the 'Select start-up disk' screen, choose the relevant Oracle Linux ISO image and click the 'Start' button.
The resulting console window will contain the Oracle Linux boot screen.
Continue through the Oracle Linux 6 installation as you would for a basic server. A general pictorial guide to the installation can be found here. More specifically, it should be a server installation with a minimum of 4G+ swap, firewall disabled, SELinux set to permissive and the following package groups installed:
- Base System > Base
- Base System > Client management tools
- Base System > Compatibility libraries
- Base System > Hardware monitoring utilities
- Base System > Large Systems Performance
- Base System > Network file system client
- Base System > Performance Tools
- Base System > Perl Support
- Servers > Server Platform
- Servers > System administration tools
- Desktops > Desktop
- Desktops > Desktop Platform
- Desktops > Fonts
- Desktops > General Purpose Desktop
- Desktops > Graphical Administration Tools
- Desktops > Input Methods
- Desktops > X Window System
- Applications > Internet Browser
- Development > Additional Development
- Development > Development Tools
To be consistent with the rest of the article, the following information should be set during the installation:
- hostname: ol6-112-rac1.localdomain
- IP Address eth0: 192.168.0.111 (public address)
- Default Gateway eth0: 192.168.0.1 (public address)
- IP Address eth1: 192.168.1.111 (private address)
- Default Gateway eth1: none
You are free to change the IP addresses to suit your network, but remember to stay consistent with those adjustments throughout the rest of the article.
Oracle Installation Prerequisites
Perform either the Automatic Setup or the Manual Setup to complete the basic prerequisites. The Additional Setup is required for all installations.
Automatic Setup
If you plan to use the 'oracle-rdbms-server-11gR2-preinstall' package to perform all your prerequisite setup, follow the instructions at http://public-yum.oracle.com to set up the yum repository for OL, then perform the following command.
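With the OL yum repository configured, the package is installed as root:

```shell
# Performs the kernel parameter, limits, package, user and group setup
# required for an 11gR2 database server.
yum install oracle-rdbms-server-11gR2-preinstall
```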
All necessary prerequisites will be performed automatically.
It is probably worth doing a full update as well, but this is not strictly speaking necessary.
Manual Setup
If you have not used the 'oracle-rdbms-server-11gR2-preinstall' package to perform all prerequisites, you will need to manually perform the following setup tasks.
In addition to the basic OS installation, the following packages must be installed whilst logged in as the root user. This includes the 64-bit and 32-bit versions of some packages. The commented out packages are those already installed if you have followed the suggested package selection.
Add or amend the following lines to the '/etc/sysctl.conf' file.
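The entries below are the standard 11gR2 prerequisite values from the Oracle installation documentation; treat them as a sketch and check them against the documentation for your exact release.

```
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
```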
Run the following command to change the current kernel parameters.
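Assuming the standard setup, the command is:

```shell
# Re-read /etc/sysctl.conf and apply the values to the running kernel.
/sbin/sysctl -p
```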
Add the following lines to the '/etc/security/limits.conf' file.
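A sketch of the standard 11gR2 limits for the 'oracle' user (again, verify the values against the installation documentation):

```
oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   nofile   4096
oracle   hard   nofile   65536
oracle   soft   stack    10240
```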
Add the following lines to the '/etc/pam.d/login' file, if it does not already exist.
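The line in question is:

```
session    required     pam_limits.so
```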
Create the new groups and users.
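The group and user IDs below are example values; the group names ('oinstall', 'dba') match those used later during the grid and database installations.

```shell
# Create the Oracle inventory and DBA groups, then the oracle software owner.
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g oinstall -G dba oracle
```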
Additional Setup
Perform the following steps whilst logged into the 'ol6-112-rac1' virtual machine as the root user.
Set the password for the 'oracle' user.
Install the following package from the Oracle grid media after you've defined groups.
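The package in question is 'cvuqdisk', found in the 'rpm' directory of the grid media. The path and version wildcard below are illustrative:

```shell
# Run as root from the grid media. CVUQDISK_GRP names the group that
# should own the package's utilities.
cd /path/to/grid/media/rpm
CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
rpm -Uvh cvuqdisk-*
```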
If you are not using DNS, the '/etc/hosts' file must contain the following information.
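A sketch consistent with the addresses used in this article. The virtual IP and SCAN addresses below are my assumed values, chosen on the same subnet as the public addresses; adjust them to suit your network.

```
127.0.0.1       localhost.localdomain   localhost
# Public
192.168.0.111   ol6-112-rac1.localdomain        ol6-112-rac1
192.168.0.112   ol6-112-rac2.localdomain        ol6-112-rac2
# Private
192.168.1.111   ol6-112-rac1-priv.localdomain   ol6-112-rac1-priv
192.168.1.112   ol6-112-rac2-priv.localdomain   ol6-112-rac2-priv
# Virtual
192.168.0.113   ol6-112-rac1-vip.localdomain    ol6-112-rac1-vip
192.168.0.114   ol6-112-rac2-vip.localdomain    ol6-112-rac2-vip
# SCAN
192.168.0.115   ol6-112-scan.localdomain        ol6-112-scan
```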
Even with the SCAN address defined in the hosts file, it still needs to be defined on the DNS to round-robin between 3 addresses on the same subnet as the public IPs. The DNS configuration is described here. Having said that, I normally include everything except the SCAN entries when using DNS.
Amend the '/etc/security/limits.d/90-nproc.conf' file as described below. See MOS Note [ID 1487773.1]
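Following the MOS note, the default soft nproc limit for all users is raised; a sketch of the amended file:

```
# Original line: *          soft    nproc    1024
*          -    nproc    16384
root       soft nproc    unlimited
```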
Change the setting of SELinux to permissive by editing the '/etc/selinux/config' file, making sure the SELINUX flag is set as follows.
If you have the Linux firewall enabled, you will need to disable or configure it, as shown here or here. The following is an example of disabling the firewall.
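On OL6 the firewall can be disabled as root with:

```shell
# Stop the firewall now and prevent it starting on boot.
service iptables stop
chkconfig iptables off
```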
Either configure NTP, or make sure it is not configured so the Oracle Cluster Time Synchronization Service (ctssd) can synchronize the times of the RAC nodes. If you want to deconfigure NTP do the following.
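A typical deconfiguration on OL6 looks like the following (the '-f' makes the pid file removal harmless if the file doesn't exist):

```shell
# Stop NTP, prevent it starting on boot, and move its config aside so the
# Oracle Cluster Time Synchronization Service runs in active mode.
service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.orig
rm -f /var/run/ntpd.pid
```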
If you want to use NTP, you must add the '-x' option into the following line in the '/etc/sysconfig/ntpd' file.
Then restart NTP.
Create the directories in which the Oracle software will be installed.
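The locations below match the software locations entered later in the installers ('/u01/app/11.2.0.3/grid' and '/u01/app/oracle/product/11.2.0.3/db_1'):

```shell
# Create the grid and database Oracle homes and hand them to the oracle user.
mkdir -p /u01/app/11.2.0.3/grid
mkdir -p /u01/app/oracle/product/11.2.0.3/db_1
chown -R oracle:oinstall /u01
chmod -R 775 /u01
```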
Log in as the 'oracle' user and add the following lines at the end of the '/home/oracle/.bash_profile' file.
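A sketch consistent with the paths, hostname and SID used in this article (a reasonable reconstruction, not the verbatim original):

```shell
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=ol6-112-rac1.localdomain; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
GRID_HOME=/u01/app/11.2.0.3/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/11.2.0.3/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

# Convenience aliases for switching environments.
alias grid_env='. /home/oracle/grid_env'
alias db_env='. /home/oracle/db_env'
```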
Create a file called '/home/oracle/grid_env' with the following contents.
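A sketch of the grid environment file, pointing the session at the grid home and ASM instance for this node:

```shell
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
```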
Create a file called '/home/oracle/db_env' with the following contents.
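A sketch of the database environment file, the counterpart of 'grid_env' pointing at the database home and instance:

```shell
ORACLE_SID=RAC1; export ORACLE_SID
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
```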
Once the '/home/oracle/.bash_profile' has been run, you will be able to switch between environments as follows.
We've made a lot of changes, so it's worth doing a reboot of the VM at this point to make sure all the changes have taken effect.
Install Guest Additions
Click on the 'Devices > Install Guest Additions' menu option at the top of the VM screen. If you get the option to auto-run take it. If not, then run the following commands.
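If auto-run is not offered, mounting the additions ISO and running the installer manually looks something like this (mount point is an example):

```shell
# Mount the guest additions CD image and run the Linux installer as root.
mkdir -p /media/cdrom
mount -r /dev/cdrom /media/cdrom
cd /media/cdrom
sh ./VBoxLinuxAdditions.run
cd /
umount /media/cdrom
```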
The VM will need to be restarted for the additions to be used properly. The next section requires a shutdown so no additional restart is needed at this time.
Create Shared Disks
Shut down the 'ol6-112-rac1' virtual machine using the following command.
On the host server, create 4 sharable virtual disks and associate them as virtual media using the following commands. You can pick a different location, but make sure they are outside the existing VM directory.
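A sketch using 'VBoxManage'; adjust the paths to suit your host, and note that the storage controller name ('SATA') must match the controller name shown in your VM's storage settings.

```shell
# Create four 5G preallocated (fixed) virtual disks for ASM.
VBoxManage createhd --filename /u04/VirtualBox/ol6-112-rac/asm1.vdi --size 5120 --format VDI --variant Fixed
VBoxManage createhd --filename /u04/VirtualBox/ol6-112-rac/asm2.vdi --size 5120 --format VDI --variant Fixed
VBoxManage createhd --filename /u04/VirtualBox/ol6-112-rac/asm3.vdi --size 5120 --format VDI --variant Fixed
VBoxManage createhd --filename /u04/VirtualBox/ol6-112-rac/asm4.vdi --size 5120 --format VDI --variant Fixed

# Attach them to the first VM as shareable media.
VBoxManage storageattach ol6-112-rac1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /u04/VirtualBox/ol6-112-rac/asm1.vdi --mtype shareable
VBoxManage storageattach ol6-112-rac1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium /u04/VirtualBox/ol6-112-rac/asm2.vdi --mtype shareable
VBoxManage storageattach ol6-112-rac1 --storagectl "SATA" --port 3 --device 0 --type hdd --medium /u04/VirtualBox/ol6-112-rac/asm3.vdi --mtype shareable
VBoxManage storageattach ol6-112-rac1 --storagectl "SATA" --port 4 --device 0 --type hdd --medium /u04/VirtualBox/ol6-112-rac/asm4.vdi --mtype shareable

# Mark the disks as shareable.
VBoxManage modifyhd /u04/VirtualBox/ol6-112-rac/asm1.vdi --type shareable
VBoxManage modifyhd /u04/VirtualBox/ol6-112-rac/asm2.vdi --type shareable
VBoxManage modifyhd /u04/VirtualBox/ol6-112-rac/asm3.vdi --type shareable
VBoxManage modifyhd /u04/VirtualBox/ol6-112-rac/asm4.vdi --type shareable
```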
Start the 'ol6-112-rac1' virtual machine by clicking the 'Start' button on the toolbar. When the server has started, log in as the root user so you can configure the shared disks. The current disks can be seen by issuing the following commands.
Use the 'fdisk' command to partition the disks sdb to sde. The following output shows the expected fdisk output for the sdb disk.
In each case, the sequence of answers is 'n', 'p', '1', 'Return', 'Return' and 'w'.
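The interaction for '/dev/sdb' looks like this, with the answers shown as comments; repeat for sdc, sdd and sde:

```shell
fdisk /dev/sdb
# Command: n               (new partition)
# Select: p                (primary)
# Partition number: 1
# First cylinder: <Return> (accept the default)
# Last cylinder: <Return>  (accept the default, using the whole disk)
# Command: w               (write the partition table and exit)
```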
Once all the disks are partitioned, the results can be seen by repeating the previous 'ls' command.
Configure your UDEV rules, as shown here.
Add the following to the '/etc/scsi_id.config' file to configure SCSI devices as trusted. Create the file if it doesn't already exist.
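The required entry is:

```
options=-g
```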
The SCSI IDs of my disks are displayed below.
Using these values, edit the '/etc/udev/rules.d/99-oracle-asmdevices.rules' file adding the following 4 entries. All parameters for a single entry must be on the same line.
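The SCSI IDs below are placeholders; substitute the values reported for your disks by '/sbin/scsi_id -g -u -d /dev/sdX'. Remember each rule must be on a single line.

```
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id_of_sdb>", NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id_of_sdc>", NAME="asm-disk2", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id_of_sdd>", NAME="asm-disk3", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id_of_sde>", NAME="asm-disk4", OWNER="oracle", GROUP="dba", MODE="0660"
```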
Load updated block device partition tables.
Test the rules are working as expected.
Reload the UDEV rules and start UDEV.
The disks should now be visible and have the correct ownership using the following command. If they are not visible, your UDEV configuration is incorrect and must be fixed before you proceed.
The shared disks are now configured for the grid infrastructure.
Clone the Virtual Machine
Later versions of VirtualBox allow you to clone VMs, but these also attempt to clone the shared disks, which is not what we want. Instead we must manually clone the VM.
Shut down the 'ol6-112-rac1' virtual machine using the following command.
Manually clone the 'ol6-112-rac1.vdi' disk using the following commands on the host server.
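A sketch of the clone (the directory layout is an example; use the locations of your own VM files):

```shell
# Copy the first node's system disk for use by the second node.
VBoxManage clonehd /u01/VirtualBox/ol6-112-rac1/ol6-112-rac1.vdi \
                   /u01/VirtualBox/ol6-112-rac2/ol6-112-rac2.vdi
```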
Create the 'ol6-112-rac2' virtual machine in VirtualBox in the same way as you did for 'ol6-112-rac1', with the exception of using an existing 'ol6-112-rac2.vdi' virtual hard drive.
Remember to add the second network adaptor as you did on the 'ol6-112-rac1' VM. When the VM is created, attach the shared disks to this VM.
Start the 'ol6-112-rac2' virtual machine by clicking the 'Start' button on the toolbar. Ignore any network errors during the startup.
Log in to the 'ol6-112-rac2' virtual machine as the 'root' user so we can reconfigure the network settings to match the following.
- hostname: ol6-112-rac2.localdomain
- IP Address eth0: 192.168.0.112 (public address)
- Default Gateway eth0: 192.168.0.1 (public address)
- IP Address eth1: 192.168.1.112 (private address)
- Default Gateway eth1: none
Amend the hostname in the '/etc/sysconfig/network' file.
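For the second node the file should contain something like:

```
NETWORKING=yes
HOSTNAME=ol6-112-rac2.localdomain
```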
Check the MAC address of each of the available network connections. Don't worry that they are listed as 'eth2' and 'eth3'. These are dynamically created connections because the MAC address of the 'eth0' and 'eth1' connections is incorrect.
Edit the '/etc/sysconfig/network-scripts/ifcfg-eth0', amending only the IPADDR and HWADDR settings as follows and deleting the UUID entry. Note, the HWADDR value comes from the 'eth2' interface displayed above.
Edit the '/etc/sysconfig/network-scripts/ifcfg-eth1', amending only the IPADDR and HWADDR settings as follows and deleting the UUID entry. Note, the HWADDR value comes from the 'eth3' interface displayed above.
If the adapter names do not reset properly, check the HWADDR settings in the '/etc/udev/rules.d/70-persistent-net.rules' file. If they are incorrect, amend them to match the settings described above.
Edit the '/home/oracle/.bash_profile' file on the 'ol6-112-rac2' node to correct the ORACLE_SID and ORACLE_HOSTNAME values.
Also, amend the ORACLE_SID setting in the '/home/oracle/db_env' and '/home/oracle/grid_env' files.
Restart the 'ol6-112-rac2' virtual machine and start the 'ol6-112-rac1' virtual machine. When both nodes have started, check they can both ping all the public and private IP addresses using the following commands.
At this point the virtual IP addresses defined in the '/etc/hosts' file will not work, so don't bother testing them.
Check the UDEV rules are working on both machines. In previous versions of OL6 the '/etc/udev/rules.d/99-oracle-asmdevices.rules' file copied between servers during the clone without any issues. For some reason, this doesn't seem to happen on my OL6.3 installations, so you may need to repeat the UDEV configuration on the second node if the output of the following command is not consistent on both nodes.
Prior to 11gR2 we would probably use the 'runcluvfy.sh' utility in the clusterware root directory to check the prerequisites have been met. If you are intending to configure SSH connectivity using the installer this check should be omitted as it will always fail. If you want to setup SSH connectivity manually, then once it is done you can run the 'runcluvfy.sh' with the following command.
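Assuming the node names used in this article, the check is run from the grid media directory as the oracle user (path is illustrative):

```shell
cd /path/to/grid/media
./runcluvfy.sh stage -pre crsinst -n ol6-112-rac1,ol6-112-rac2 -verbose
```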
If you get any failures be sure to correct them before proceeding.
The virtual machine setup is now complete.
Before moving forward you should probably shut down your VMs and take snapshots of them. If any failures happen beyond this point it is probably better to switch back to those snapshots, clean up the shared drives and start the grid installation again. An alternative to cleaning up the shared disks is to back them up now using zip and just replace them in the event of a failure.
Install the Grid Infrastructure
Make sure both virtual machines are started, then login to 'ol6-112-rac1' as the oracle user and start the Oracle installer.
Select the 'Skip software updates' option, then click the 'Next' button.
Select the 'Install and Configure Oracle Grid Infrastructure for a Cluster' option, then click the 'Next' button.
Select the 'Typical Installation' option, then click the 'Next' button.
On the 'Specify Cluster Configuration' screen, enter the correct SCAN Name and click the 'Add' button.
Enter the details of the second node in the cluster, then click the 'OK' button.
Click the 'SSH Connectivity..' button and enter the password for the 'oracle' user. Click the 'Setup' button to configure SSH connectivity, and the 'Test' button to test it once it is complete.
Click the 'Identify network interfaces..' button and check the public and private networks are specified correctly. Once you are happy with them, click the 'OK' button and the 'Next' button on the previous screen.
Enter '/u01/app/11.2.0.3/grid' as the software location and 'Automatic Storage Management' as the cluster registry storage type. Enter the ASM password, select 'dba' as the group and click the 'Next' button.
Set the redundancy to 'External', click the 'Change Discovery Path' button and set the path to '/dev/asm*'. Return to the main screen, select all 4 disks and click the 'Next' button.
Accept the default inventory directory by clicking the 'Next' button.
Wait while the prerequisite checks complete. If you have any issues, either fix them or check the 'Ignore All' checkbox and click the 'Next' button.
If you are happy with the summary information, click the 'Install' button.
Wait while the setup takes place.
When prompted, run the configuration scripts on each node.
The output from the 'orainstRoot.sh' file should look something like that listed below.
The output of the root.sh will vary a little depending on the node it is run on. Example output can be seen here (Node1, Node2).
Once the scripts have completed, return to the 'Execute Configuration Scripts' screen on 'rac1' and click the 'OK' button.
Wait for the configuration assistants to complete.
We expect the verification phase to fail with an error relating to the SCAN, assuming you are not using DNS.
Provided this is the only error, it is safe to ignore this and continue by clicking the 'Next' button.
Click the 'Close' button to exit the installer.
The grid infrastructure installation is now complete.
Install the Database
Make sure the 'ol6-112-rac1' and 'ol6-112-rac2' virtual machines are started, then login to 'ol6-112-rac1' as the oracle user and start the Oracle installer.
Uncheck the security updates checkbox and click the 'Next' button and 'Yes' on the subsequent warning dialog.
Check the 'Skip software updates' checkbox and click the 'Next' button.
Accept the 'Create and configure a database' option by clicking the 'Next' button.
Accept the 'Server Class' option by clicking the 'Next' button.
Make sure both nodes are selected, then click the 'Next' button.
Accept the 'Typical install' option by clicking the 'Next' button.
Enter '/u01/app/oracle/product/11.2.0.3/db_1' for the software location. The storage type should be set to 'Automatic Storage Management'. Enter the appropriate passwords and database name, in this case 'RAC.localdomain'.
Wait for the prerequisite check to complete. If there are any problems either fix them, or check the 'Ignore All' checkbox and click the 'Next' button.
If you are happy with the summary information, click the 'Install' button.
Wait while the installation takes place.
Once the software installation is complete the Database Configuration Assistant (DBCA) will start automatically.
Once the Database Configuration Assistant (DBCA) has finished, click the 'OK' button.
When prompted, run the configuration scripts on each node. When the scripts have been run on each node, click the 'OK' button.
Click the 'Close' button to exit the installer.
The RAC database creation is now complete.
Check the Status of the RAC
There are several ways to check the status of the RAC. The srvctl utility shows the current configuration and status of the RAC database.
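With the oracle user's database environment set (e.g. via 'db_env'), and assuming the database name used in this article ('RAC'), the commands are:

```shell
# Show the configuration and the running status of the RAC database.
srvctl config database -d RAC
srvctl status database -d RAC
```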
The V$ACTIVE_INSTANCES view can also display the current status of the instances.
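For example, from SQL*Plus on any node (the INST_NAME column shows the host and instance name of each active instance):

```shell
sqlplus / as sysdba <<'EOF'
SELECT inst_name FROM v$active_instances;
EOF
```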
If you have configured Enterprise Manager, it can be used to view the configuration and current status of the database using a URL like 'https://ol6-112-rac1.localdomain:1158/em'.
Hope this helps. Regards Tim..