IP Address Management (IPAM)

IPAM is a means of planning, tracking, and managing the Internet Protocol address space, where an address space defines a range of discrete addresses used in a network.

IPAM integrates Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP) so that each is aware of changes in the other (for instance, DNS learning the IP address a client took via DHCP and updating itself accordingly). Additional functionality, such as controlling reservations in DHCP as well as other data aggregation and reporting capabilities, is also common.

IPAM tools are increasingly important as new IPv6 networks are deployed with larger address pools, different subnetting techniques, and 128-bit hexadecimal addresses that are not as easily human-readable as IPv4 addresses. IPv6 networking, mobile computing, and multihoming also require more dynamic address management. With IPAM, administrators can ensure that the inventory of assignable IP addresses remains current and sufficient.
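As a concrete (if simplified) picture of what this tracking means, here is a short Python sketch using the standard-library ipaddress module; the subnet and the already-assigned addresses are invented for illustration:

```python
import ipaddress

# Hypothetical subnet and a set of addresses already handed out (e.g. by DHCP).
subnet = ipaddress.ip_network("192.168.10.0/28")
assigned = {ipaddress.ip_address(a) for a in ("192.168.10.1", "192.168.10.5")}

# Usable host addresses exclude the network and broadcast addresses.
free = [h for h in subnet.hosts() if h not in assigned]

print(len(free))   # 12 usable addresses remain (14 hosts minus 2 assigned)
print(free[0])     # 192.168.10.2 is the first free address
```

A real IPAM system does this bookkeeping continuously across many subnets, fed by DHCP lease and DNS data.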

IP Address Management (IPAM) in Windows Server 2012 and Windows Server 2012 R2 is an integrated suite of tools to enable end-to-end planning, deploying, managing and monitoring of your IP address infrastructure, with a rich user experience. IPAM automatically discovers IP address infrastructure servers on your network and enables you to manage them from a central interface.

IPAM includes components for:

  1. Address Space Management
  2. Virtual Address Space Management
  3. Multi-Server Management and Monitoring
  4. Network Audit
  5. Role-based access control

Virtual IP address space management is enabled through integration of IPAM with System Center Virtual Machine Manager and is available in Windows Server 2012 R2 and later operating systems. This feature is not available with IPAM in Windows Server 2012.

Role-based access control is available in Windows Server 2012 using local user groups on the IPAM server. This feature was significantly enhanced in Windows Server 2012 R2 to include detailed built-in and custom role-based access groups.


Network administrators use IPAM to update various details about their networks:

  • How much free IP address space exists.
  • What subnets are in use, how large they are, and who uses them.
  • Permanent versus temporary status for each IP address.
  • Default routers that the various network devices use.
  • The host name associated with each IP address.
  • The specific hardware associated with each IP address.


  1. Add the IPAM role to Windows Server 2012 R2
  2. Open Server Manager and add the IPAM server
  3. Open the IPAM node
  4. Using the Quick Start, select Provision the IPAM Server


5. Read the information at the start of the wizard and click Next

6. On the Configure Database screen select either the WID or SQL Server (I chose WID) and click Next

7. On the Select Provisioning Method screen select Group Policy Based and enter a prefix for the IPAM GPOs (I used IPAM), then click Next

8. Read the summary and hit Apply

9. When the wizard has completed, read the summary and click Close

10. Back at the IPAM Quick Start select the Configure Server Discovery link

11. Select the domain to add to the discovery scope from the drop-down box and click Add, check the types of roles to discover (I checked them all), then click OK

12. On the IPAM Quick Start select step 4, Start Server Discovery, and wait for the discovery to finish

13. On the IPAM Quick Start select step 5, Select or add servers to manage and verify IPAM access

14. At this point my server said Set Manageability Status with a warning sign, so right-click the server and select Edit Server

15. Set its status to Managed, check that the correct Server Types have been picked up, then click OK

16. Next the server showed up as blocked; there are a couple of reasons for this. First we need to make sure the server has the GPOs applied, so connect to the server in question

17. Check that the GPOs exist: open the Group Policy Management console and identify them visually; they should have an IPAM_ prefix if you used that prefix earlier

18. If they don’t exist then provision them with this PowerShell, changing the appropriate parameters for your environment:

Invoke-IpamGpoProvisioning -Domain contoso.com -GpoPrefixName IPAM -IpamServerFqdn ipam.contoso.com -DomainController orange-dc.contoso.com

19. Again verify that the GPOs exist

20. Now we need to change the security filtering on the IPAM GPOs to include our server, so add the server to each GPO

21. We then need to apply the GPOs on our servers using gpupdate /force

22. To be sure the policies have applied we can run gpresult /r and should see the IPAM GPOs listed

23. Next we need to allow our IPAM server to read the event logs on our servers, so add the IPAM server computer account to the Event Log Readers AD group. I used ADAC but you could use PowerShell like this:

Set-ADGroup -Add:@{'Member'="CN=IPAM,CN=Computers,DC=Contoso,DC=com"} -Identity:"CN=Event Log Readers,CN=Builtin,DC=Contoso,DC=com" -Server:"Orange-DC.Contoso.com"

24. Return to the IPAM node in Server Manager, select the Server Inventory node, right-click the server in question and select Refresh Server Access Status, then refresh Server Manager. The status should change to IPAM Unblocked.


Install Exchange 2013 (SP1) on Windows Server 2012 R2

Exchange 2013 SP1 was released in February this year, providing support for Windows Server 2012 R2. In this blog we’ll run through the installation process.

The demo environment I am using includes a Windows Server 2012 R2 domain controller and a Server 2012 R2 member server.

In the demo environment no previous versions of Exchange have been installed, so as part of the Exchange 2013 SP1 installation we will upgrade the AD schema. Even if you are already running Exchange 2013, the installation of SP1 requires a schema update. Note that in this scenario we are going to jump straight to installing Exchange 2013 SP1, without installing Exchange 2013 first.

Finally before we start, always test in a demo environment before deploying in Production!

I hope this walk through helps.


1. On your 2012 R2 member server, download Exchange 2013 SP1; see here for the latest version. Note: check for the latest Cumulative Update and install directly from that to save you patching the SP1 install once complete; currently the latest is CU11, released on 15/12/2015.

2. Once downloaded extract the files by running the Exchange 2013-x64-SP1 executable. In my environment I have extracted them to C:\Sw\Exchange2013SP1.

3. On a Server 2012 R2 member server, run PowerShell as Administrator.


4. Run the following command to install the Active Directory Remote Administration Tool (Source: Exchange 2013 Prerequisites)

Install-WindowsFeature RSAT-ADDS


5. In the same PowerShell Window run the following command to prepare the server for the Mailbox or CAS server roles (Source: Exchange 2013 Prerequisites):

Install-WindowsFeature AS-HTTP-Activation, Desktop-Experience, NET-Framework-45-Features, RPC-over-HTTP-proxy, RSAT-Clustering, RSAT-Clustering-CmdInterface, RSAT-Clustering-Mgmt, RSAT-Clustering-PowerShell, Web-Mgmt-Console, WAS-Process-Model, Web-Asp-Net45, Web-Basic-Auth, Web-Client-Auth, Web-Digest-Auth, Web-Dir-Browsing, Web-Dyn-Compression, Web-Http-Errors, Web-Http-Logging, Web-Http-Redirect, Web-Http-Tracing, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Lgcy-Mgmt-Console, Web-Metabase, Web-Mgmt-Console, Web-Mgmt-Service, Web-Net-Ext45, Web-Request-Monitor, Web-Server, Web-Stat-Compression, Web-Static-Content, Web-Windows-Auth, Web-WMI, Windows-Identity-Foundation


6. Reboot the server to complete the installation of the Windows Features.

7. Download and install Microsoft Unified Communications Managed API 4.0, Core Runtime 64-bit (Source: Exchange 2013 Prerequisites).


8. On the 2012 R2 member server, run Cmd as Administrator.


9. Now let’s prepare the domain for Exchange 2013 SP1. Go to the location where you extracted the Exchange 2013 SP1 installation files (C:\Sw\Exchange2013SP1).

First run setup.exe /help to list the help options available to you.


As this is the first Exchange Server in our environment we need to prepare the topology so next run setup.exe /help:preparetopology.


The three commands we are interested in are /PrepareSchema, /PrepareAD and /PrepareDomain, but as this is a new installation we’ll also need to use the /OrganizationName switch.

10. So let’s run the first command:

Setup.exe /PrepareSchema

You’ll see that without adding the additional switch /IAcceptExchangeServerLicenseTerms we get a warning and the installation goes no further.

So let’s run the first command in full:

Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms


11. Now we run Setup /PrepareAD with the /OrganizationName parameter, as this is a new Exchange installation:

Setup.exe /PrepareAD /OrganizationName:OxfordSBSGuy /IAcceptExchangeServerLicenseTerms


12. Finally, the last command we run is to prepare the domain; note that in a multi-domain environment there is the option to use /PrepareAllDomains:

Setup.exe /PrepareDomain /IAcceptExchangeServerLicenseTerms


13. Now type Setup.exe and hit return. Click Next to check for updates.

14. Click Next.

15. Read the Introduction and click Next.

16. Accept the license agreement and click Next.

17. Use recommended settings, click Next.

18. Select the server roles you require; for my test lab I am selecting the Mailbox and Client Access roles, click Next.

19. Choose a location to install, click Next.

20. Choose Malware Protection settings and click Next.

21. Once the readiness check has completed click Install.

22. Tick the box to Launch the Exchange Administration Center, and click Finish. You have successfully installed Exchange 2013.

Now Exchange 2013 has been installed check out the multi-part series on Exchange 2013 Initial Configuration Settings.

Source Material:

  1. Technet Exchange 2013 Prerequisites
  2. Technet Exchange 2013 System Requirements
  3. Released: Exchange Server 2013 Service Pack 1
  4. Technet Prepare Active Directory and Domains
  5. The Exchange Guy – Preparing Active Directory and Schema for Exchange 2013 Release Preview

Related Posts:

  1. Exchange 2013 Initial Configuration Settings multi-part series
  2. Exchange Server and Update Rollups Build Numbers
  3. Exchange 2010 SP3 Update Rollup 12 released and installation tips
  4. Exchange PowerShell: How to list all SMTP email addresses in Exchange
  5. Exchange PowerShell: How to enumerate Distribution Lists, managers and members

Windows Deployment Services Modifications for Syslinux

To make this possible, alter WDS to serve up a PXELinux menu with options to either proceed with WDS or jump over to a Linux PXE server:

  • Download Syslinux 3.86 and extract it to a temporary location
  • Copy the following three files directly to your WDS x64 boot directory, e.g., D:\RemoteInstall\Boot\x64\
    • core\pxelinux.0
    • modules\pxechain.com
    • com32\menu\menu.c32
  • Make duplicate copies of these existing WDS files (they should already be present in the directory above); the copies need to have “zero” as the extension:
    • pxeboot.n12 -> pxeboot.0
    • abortpxe.com -> abortpxe.0
  • Create a directory in x64 named “pxelinux.cfg”
  • Create a new text file: x64\pxelinux.cfg\default with the following as a guide:

DEFAULT menu.c32

LABEL wds
MENU LABEL Windows Deployment Services
KERNEL pxeboot.0

LABEL abort
KERNEL abortpxe.0

LABEL linuxpxe
MENU LABEL Linux PXE server…
KERNEL pxechain.com
APPEND <linux-pxe-server-ip>
# IP address above is the Linux PXE host

To activate, run these two commands from a command prompt on the WDS server:

wdsutil /set-server /bootprogram:boot\x64\pxelinux.0 /architecture:x64

wdsutil /set-server /N12bootprogram:boot\x64\pxelinux.0 /architecture:x64


Boot a machine from the network and you should get a PXELinux menu that offers a choice:

One other note: the Linux PXE server doesn’t actually need to be on the same network, it just needs to be reachable from the client.
Step Two – Install PXELinux

PXELinux is part of the Syslinux package; extract the download and locate these three files:

  • ZIP\core\pxelinux.0
  • ZIP\com32\menu\vesamenu.c32
  • ZIP\com32\modules\chain.c32
  • Copy the files into \\WDS\REMINST\Boot\x86
  • Rename pxelinux.0 to pxelinux.com
  • You also need to make copies of two original WDS files in this folder
  • Copy pxeboot.n12 and rename it to pxeboot.0
  • Copy abortpxe.com and rename it to abortpxe.0
  • Create two new subfolders
  • \\WDS\REMINST\Boot\x86\Linux
  • \\WDS\REMINST\Boot\x86\pxelinux.cfg
  • The pxelinux.cfg folder is where you store the files that make up the PXE boot (F12) menu.
  • All the files we will put in there are text files, even though they don’t use a .txt extension
  • First create a new text file called default. This is the first menu that loads.
  • Paste the following text into it:
DEFAULT vesamenu.c32
PROMPT 0
MENU TITLE PXE Boot Menu (x86)
MENU INCLUDE pxelinux.cfg/graphics.conf
MENU AUTOBOOT Starting Local System in 8 seconds

# Option 1 – Exit PXE Linux & boot normally
LABEL bootlocal
menu label ^Boot Normally
menu default
localboot 0
timeout 80

# Option 2 – Run WDS
LABEL wds
MENU LABEL ^Windows Deployment Services
KERNEL pxeboot.0

# Option 3 – Exit PXE Linux
LABEL Abort
MENU LABEL E^xit
KERNEL abortpxe.0
  • Now create a text file called graphics.conf
  • This file controls how the menu is displayed. It’s very versatile, so have a play around until it looks as basic or as flashy as you like
  • Paste the following text into it:
MENU ROWS 16
MENU COLOR BORDER 30;44 #00000000 #00000000 none
MENU COLOR SCROLLBAR 30;44 #00000000 #00000000 none
MENU COLOR TITLE 0 #00269B #00000000 none
MENU COLOR SEL 30;47 #40000000 #20ffffff
MENU BACKGROUND background.jpg
  • If you want to use a custom background, place it in the \\WDS\REMINST\Boot\x86 folder.
  • The image should be a 640×480 JPEG file.
  • Make sure it has the same name as specified in the MENU BACKGROUND line in graphics.conf
  • Now we need to change the default boot program in WDS
  • Open the Windows Deployment Services console
  • Right-click on your server and select Properties
  • From the Boot tab change the default boot program for x86 architecture to \Boot\x86\pxelinux.com
  • In Server 2008 R2 you have to use the wdsutil command line to set the default boot program with these commands:

wdsutil /set-server /bootprogram:boot\x86\pxelinux.com /architecture:x86

wdsutil /set-server /N12bootprogram:boot\x86\pxelinux.com /architecture:x86

Step Three – Test it out

Before you go any further, do a test PXE boot to check everything is OK.

I use a Hyper-V VM to make this testing process quicker. Just make sure it’s set to boot from a legacy network adapter in the settings.

If it doesn’t load, make sure you have the following files and folders in the right place within the \\WDS\REMINST share:

  • \Boot\x86\pxelinux.com
  • \Boot\x86\vesamenu.c32
  • \Boot\x86\chain.c32
  • \Boot\x86\pxeboot.0
  • \Boot\x86\abortpxe.0
  • \Boot\x86\background.jpg
  • \Boot\x86\Linux\
  • \Boot\x86\pxelinux.cfg\
  • \Boot\x86\pxelinux.cfg\default
  • \Boot\x86\pxelinux.cfg\graphics.conf

Step Four – Add new boot options

If you can boot into the new menu and still load WDS then we are ready to add our Linux distros and other tools. If not, go back to step one and check everything.

This stage is relatively easy. It is just a case of putting the relevant netboot files for your preferred distribution into the \Boot\x86\Linux folder and then adding a menu option for them. You can find more info on where to get these on the official WDSLinux wiki. I’ll show you a more generic way of doing things using Debian as an example.

  • Create a new subfolder
  • \Boot\x86\Linux\Debian\
  • Download the netboot files (initrd.gz and linux) from a Debian mirror
  • Copy them into the Debian subfolder
  • Create a menu entry for them in \Boot\x86\pxelinux.cfg\default:

LABEL debian6Netinstall
menu label ^Debian 6-0 Net-install
# Load the correct kernel
kernel /Linux/Debian/linux
# Boot options
append priority=low vga=normal initrd=/Linux/Debian/initrd.gz

That’s all there is to it. As long as you download the correct files and boot the correct boot options on the append line of the menu, you should be OK.

What if I need 64-bit options?

This is easy too: just replicate everything we did in \Boot\x86 into \Boot\x64. Don’t forget to change the WDS server boot program for the x64 architecture (as shown in Step Two) to \Boot\x64\pxelinux.com

Taking it further

Hopefully you are reading this because you have numerous ideas of what you could boot to. To help you along I’ve included my current default menu as well as sub-menus for Linux and Tools. Most of them were fairly straightforward as they had special PXE versions with instructions on their websites, e.g. GParted and Clonezilla.


DEFAULT vesamenu.c32
PROMPT 0

MENU TITLE PXE Boot Menu (x86)
MENU INCLUDE pxelinux.cfg/graphics.conf
MENU AUTOBOOT Starting Local System in 8 seconds

# Option 1 – Exit PXE Linux & boot normally
LABEL bootlocal
menu label ^Boot Normally
menu default
localboot 0
timeout 80

# Option 2 – Run WDS
LABEL wds
MENU LABEL ^Windows Deployment Services
KERNEL pxeboot.0

# Go to Linux sub-menu
LABEL linux
MENU LABEL ^Linux Distros
KERNEL vesamenu.c32
APPEND pxelinux.cfg/graphics.conf pxelinux.cfg/linux.menu

# Go to Tools sub-menu
LABEL tools
MENU LABEL ^Tools
KERNEL vesamenu.c32
APPEND pxelinux.cfg/graphics.conf pxelinux.cfg/tools.menu

# Exit PXE Linux
LABEL Abort
MENU LABEL E^xit
KERNEL abortpxe.0

linux.menu (save in the same place as default)

MENU TITLE Install a Linux Distro


LABEL debian6.0-amd64-Netinstall

menu label ^Debian 6-0 amd64-Net-install:

kernel /Linux/Debian-Net-Install-amd64/Linux

append priority=low vga=normal initrd=/Linux/Debian-Net-Install-amd64/initrd.gz


LABEL Centos5.0-Install

menu label ^Centos 5-0 32bit install:

kernel /Linux/Centos-5.0-32-bit/vmlinuz

APPEND ks initrd=Linux/Centos-5.0-32-bit/initrd.img ramdisk_size=100000


LABEL Debian-5.08-Installer

menu label ^Install 5.08 (Lenny)

kernel /Linux/debian-installer/i386/linux

append vga=normal debian-installer/allow_unauthenticated=true  initrd=/Linux/debian-installer/i386/initrd.gz


LABEL Main Menu

MENU LABEL ^Back to Main Menu

KERNEL vesamenu.c32

APPEND pxelinux.cfg/default

tools.menu (save in the same place as default)



LABEL memtest

menu label ^Memory Test: Memtest86+ v4.20

kernel \Linux\memtest\memtestp

LABEL Clonezilla Live

MENU LABEL ^Clonezilla Live

kernel \Linux\Clonezilla\vmlinuz

append initrd=\Linux\Clonezilla\initrd.img boot=live live-config noswap nolocales edd=on nomodeset ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" ocs_lang="" vga=788 nosplash fetch=

LABEL gparted

MENU LABEL ^GParted Live

kernel \Linux\gparted\vmlinuz

append initrd=\Linux\gparted\initrd.img boot=live config noswap noprompt nosplash fetch=




LABEL Main Menu

MENU LABEL ^Back to Main Menu

KERNEL vesamenu.c32

APPEND pxelinux.cfg/default



Routing Protocols


Routing Protocol Selection Guide – IGRP, EIGRP, OSPF, IS-IS, BGP


The purpose of routing protocols is to learn of available routes that exist on the enterprise network, build routing tables and make routing decisions. Some of the most common routing protocols include IGRP, EIGRP, OSPF, IS-IS and BGP. There are two primary routing protocol types, although many different routing protocols are defined within those two types: distance vector and link state.

Distance vector protocols advertise their routing table to all directly connected neighbors at regular, frequent intervals, which uses a lot of bandwidth, and they are slow to converge. When a route becomes unavailable, all router tables must be updated with that new information. The problem is that with each router having to advertise the new information to its neighbors, it takes a long time for all routers to have a current, accurate view of the network. Distance vector protocols use fixed-length subnet masks, which aren’t scalable.

Link state protocols advertise routing updates only when they occur, which uses bandwidth more effectively. Routers don’t advertise the whole routing table, which makes convergence faster. The routing protocol floods the network with link state advertisements to all neighbor routers per area in an attempt to converge the network with new route information; the incremental change is all that is advertised, as a multicast LSA update. Link state protocols use variable-length subnet masks, which are scalable and use addressing more efficiently.
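The distance-vector behaviour described above can be sketched in a few lines of Python: each round, a router folds its neighbors' advertised tables into its own (the Bellman-Ford relaxation that underlies protocols like RIP and IGRP). The topology and costs below are invented for illustration:

```python
INF = float("inf")

def dv_update(my_table, neighbor_tables, link_cost):
    """Recompute my distance to every destination from neighbors' advertised tables."""
    new_table = dict(my_table)
    for neighbor, table in neighbor_tables.items():
        for dest, dist in table.items():
            candidate = link_cost[neighbor] + dist
            if candidate < new_table.get(dest, INF):
                new_table[dest] = candidate  # shorter path found via this neighbor
    return new_table

# Router A is directly connected to B (cost 1) and C (cost 4).
mine = {"A": 0, "B": 1, "C": 4}
neighbors = {
    "B": {"A": 1, "B": 0, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "C": 0, "D": 1},
}
costs = {"B": 1, "C": 4}
print(dv_update(mine, neighbors, costs))  # C now reachable at cost 3 via B, D at cost 5
```

Because each router only ever sees its neighbors' tables, news of a failed route propagates one hop per advertisement interval, which is exactly the slow convergence the paragraph above describes.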

Interior Gateway Routing Protocol (IGRP):

Interior Gateway Routing Protocol is a distance vector routing protocol developed by Cisco Systems for routing multiple protocols across small and medium-sized Cisco networks. It is proprietary, which requires that you use Cisco routers. This contrasts with IP RIP and IPX RIP, which are designed for multi-vendor networks. IGRP will route IP, IPX, Decnet and AppleTalk, which makes it very versatile for clients running many different protocols. It is somewhat more scalable than RIP since it supports a hop count of 100, only advertises every 90 seconds and uses a composite of five different metrics to select a best path destination. Note that since IGRP advertises less frequently, it uses less bandwidth than RIP but converges much slower, since it is 90 seconds before IGRP routers are aware of network topology changes. IGRP does recognize assignment of different autonomous systems and automatically summarizes at network class boundaries. As well, there is the option to load balance traffic across equal or unequal metric cost paths.


  • Distance Vector
  • Routes IP, IPX, Decnet, Appletalk
  • Routing Table Advertisements Every 90 Seconds
  • Metric: Bandwidth, Delay, Reliability, Load, MTU Size
  • Hop Count: 100
  • Fixed Length Subnet Masks
  • Summarization on Network Class Address
  • Load Balancing Across 6 Equal or Unequal Cost Paths (IOS 11.0)
  • Update Timer: 90 seconds
  • Invalid Timer: 270 seconds
  • Holddown Timer: 280 seconds
  • Metric Calculation = (10,000,000 / minimum path bandwidth in kbps) + (sum of path delays in usec / 10)
  • Split Horizon

Enhanced Interior Gateway Routing Protocol (EIGRP):

Enhanced Interior Gateway Routing Protocol is a hybrid routing protocol developed by Cisco Systems for routing many protocols across an enterprise Cisco network. It has characteristics of both distance vector routing protocols and link state routing protocols. It is proprietary, which requires that you use Cisco routers. EIGRP will route the same protocols that IGRP routes (IP, IPX, Decnet and AppleTalk) and uses the same composite metrics as IGRP to select a best path destination. As well, there is the option to load balance traffic across equal or unequal metric cost paths. Summarization is automatic at a network class address, however it can be configured to summarize at subnet boundaries as well. Redistribution between IGRP and EIGRP is automatic as well. There is support for a hop count of 255 and variable length subnet masks.


Convergence with EIGRP is faster since it uses the Diffusing Update Algorithm (DUAL), which is run when a router detects that a particular route is unavailable. The router queries its neighbors looking for a feasible successor, defined as a neighbor with a least-cost route to a particular destination that doesn’t cause any routing loops. EIGRP updates its routing table with the new route and the associated metric. Route changes are advertised only to affected routers when changes occur, which utilizes bandwidth more efficiently than distance vector routing protocols.
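The feasible-successor test that DUAL applies can be sketched in a few lines of Python: a neighbor qualifies only when its own (reported) distance to the destination is lower than our current feasible distance, which guarantees it is not routing through us and so cannot form a loop. The numbers are invented:

```python
def feasible_successors(feasible_distance, neighbors):
    """neighbors: {name: (reported_distance, total_cost_via_neighbor)}.
    Returns qualifying neighbors, best total cost first."""
    candidates = [
        (total, name)
        for name, (reported, total) in neighbors.items()
        if reported < feasible_distance  # the feasibility condition
    ]
    return [name for total, name in sorted(candidates)]

# Current successor failed; our feasible distance to the destination was 30.
neighbors = {"R2": (25, 45), "R3": (10, 35), "R4": (40, 42)}
print(feasible_successors(30, neighbors))  # ['R3', 'R2'] – R4 fails the check
```

If no neighbor passes the check, the real protocol puts the route into the active state and queries its neighbors, which is the "diffusing" part of DUAL.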

Autonomous Systems:

EIGRP does recognize assignment of different autonomous systems which are processes running under the same administrative routing domain. Assigning different autonomous system numbers isn’t for defining a backbone such as with OSPF. With IGRP and EIGRP it is used to change route redistribution, filtering and summarization points.


  • Advanced Distance Vector
  • Routes IP, IPX, Decnet, Appletalk
  • Routing Advertisements: Partial When Route Changes Occur
  • Metrics: Bandwidth, Delay, Reliability, Load, MTU Size
  • Hop Count: 255
  • Variable Length Subnet Masks
  • Summarization on Network Class Address or Subnet Boundary
  • Load Balancing Across 6 Equal or Unequal Cost Paths (IOS 11.0)
  • Hello Timer: 1 second on Ethernet / 60 seconds on Non-Broadcast
  • Holddown Timer: 3 seconds on Ethernet / 180 seconds on Non-Broadcast
  • Metric Calculation = IGRP metric (minimum path bandwidth and cumulative delay) * 256
  • Bidirectional Forwarding Detection (BFD) Support
  • Split Horizon
  • Multicast Address: 224.0.0.10

Open Shortest Path First (OSPF)

Open Shortest Path First is a true link state protocol developed as an open standard for routing IP across large multi-vendor networks. A link state protocol will send link state advertisements to all connected neighbors of the same area to communicate route information. Each OSPF enabled router, when started, will send hello packets to all directly connected OSPF routers. The hello packets contain information such as router timers, router ID and subnet mask. If the routers agree on the information they become OSPF neighbors. Once routers become neighbors they establish adjacencies by exchanging link state databases. Routers on point-to-point and point-to-multipoint links (as specified with the OSPF interface type setting) automatically establish adjacencies. Routers with OSPF interfaces configured as broadcast (Ethernet) and NBMA (Frame Relay) will use a designated router that establishes those adjacencies.


OSPF uses a hierarchy with assigned areas that connect to a core backbone of routers. Each area is defined by one or more routers that have established adjacencies. OSPF has defined backbone area 0, stub areas, not-so-stubby areas and totally stubby areas. Area 0 is built with a group of routers connected at a designated office or by WAN links across several offices. It is preferable to have all area 0 routers connected with a full mesh using an Ethernet segment at a core office. This provides for high performance and prevents partitioning of the area should a router connection fail. Area 0 is a transit area for all traffic from attached areas; any inter-area traffic must route through area 0 first.

Stub areas use a default route injected from the ABR to forward traffic destined for any external routes (LSA 5,7) to the area border router; inter-area (LSA 3,4) and intra-area (LSA 1,2) routing is as usual. Totally stubby areas are a Cisco specification that uses a default route injected from the ABR for all inter-area and external routes; the totally stubby area doesn’t advertise or receive external or inter-area LSAs. The not-so-stubby area ABR is a transit area that will import external routes with type 7 LSAs and flood them to other areas as type 5 LSAs; external routes aren’t received at that area type, and inter-area and intra-area routing is as usual.

OSPF defines internal routers, backbone routers, area border routers (ABR) and autonomous system boundary routers (ASBR). Internal routers are specific to one area. Area border routers have interfaces that are assigned to more than one area, such as area 0 and area 10. An autonomous system boundary router has interfaces assigned to OSPF and a different routing protocol such as EIGRP or BGP.

A virtual link is utilized when an area doesn’t have a direct connection to area 0: it is established between an area border router for an area that isn’t connected to area 0, and an area border router for an area that is. Area design involves considering geographical location of offices and traffic flows across the enterprise. It is important to be able to summarize addresses for many offices per area and minimize broadcast traffic.


Fast convergence is accomplished with the SPF (Dijkstra) algorithm which determines a shortest path from source to destination. The routing table is built from running SPF which determines all routes from neighbor routers. Since each OSPF router has a copy of the topology database and routing table for its particular area, any route changes are detected faster than with distance vector protocols and alternate routes are determined.
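The SPF computation that each OSPF router runs over its link state database is Dijkstra's algorithm; here is a compact Python sketch over a made-up four-router area:

```python
import heapq

def spf(graph, source):
    """Dijkstra shortest-path-first: cheapest cost from source to every router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a cheaper path was already found
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Invented area topology: link costs between routers.
area = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 1, "R4": 100},
    "R4": {"R2": 1, "R3": 100},
}
print(spf(area, "R1"))  # R4 is reached at cost 11 via R2, not 101 via R3
```

Since every router in the area holds the same topology database, each one can run this computation locally and arrive at consistent routes without exchanging full routing tables.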

Designated Router:

Broadcast networks such as Ethernet and Non-Broadcast Multi Access networks such as Frame Relay have a designated router (DR) and a backup designated router (BDR) that are elected. Designated routers establish adjacencies with all routers on that network segment. This is to reduce broadcasts from all routers sending regular hello packets to its neighbors. The DR sends multicast packets to all routers that it has established adjacencies with. If the DR fails, it is the BDR that sends multicasts to specific routers. Each router is assigned a router ID, which is the highest assigned IP address on a working interface. OSPF uses the router ID (RID) for all routing processes.
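The router-ID tiebreak described above can be sketched as follows. Note that a real election considers the configured interface priority before the router ID; this illustration ignores priority for brevity:

```python
import ipaddress

# Sketch of DR/BDR selection on a broadcast segment: the highest router ID
# (highest assigned IP address on a working interface) wins, the runner-up
# becomes the backup. Interface priorities are deliberately ignored here.
def elect_dr_bdr(router_ids):
    ordered = sorted(router_ids, key=ipaddress.ip_address, reverse=True)
    return ordered[0], ordered[1]  # (DR, BDR)

dr, bdr = elect_dr_bdr(["10.1.1.9", "10.1.1.30", "10.1.1.2"])
print(dr, bdr)  # 10.1.1.30 10.1.1.9
```

Sorting by ipaddress.ip_address rather than by string matters: as plain strings, "10.1.1.9" would wrongly sort above "10.1.1.30".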


  • Link State
  • Routes IP
  • Routing Advertisements: Partial When Route Changes Occur
  • Metric: Cost of each link to the destination (100,000,000 / interface bandwidth in bps)
  • Hop Count: None (Limited by Network)
  • Variable Length Subnet Masks
  • Summarization on Network Class Address or Subnet Boundary
  • Load Balancing Across 4 Equal Cost Paths
  • Router Types: Internal, Backbone, ABR, ASBR
  • Area Types: Backbone, Stubby, Not-So-Stubby, Totally Stubby
  • LSA Types: Intra-Area (1,2) Inter-Area (3,4), External (5,7)
  • Fast Hello Timer Interval: 250 msec. for Ethernet, 30 seconds for Non-Broadcast
  • Dead Timer Interval: 1 second for Ethernet, 120 seconds for Non-Broadcast
  • Bidirectional Forwarding Detection (BFD) Support
  • LSA Multicast Addresses: 224.0.0.5 (all OSPF routers) and 224.0.0.6 (DR/BDR). Don’t Filter!
  • Interface Types: Point to Point, Broadcast, Non-Broadcast, Point to Multipoint, Loopback
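Taking the cost formula from the list above (100,000,000 divided by interface bandwidth), a quick calculation shows why all interfaces of 100 Mb/s and faster end up with the same default cost:

```python
REFERENCE_BW = 100_000_000  # default OSPF reference bandwidth, bps

def ospf_cost(interface_bps):
    # Costs are integers and never drop below 1, so everything at or above
    # the reference bandwidth collapses to the same cost of 1.
    return max(1, REFERENCE_BW // interface_bps)

for name, bps in [("T1", 1_544_000), ("10 Mb Ethernet", 10_000_000),
                  ("Fast Ethernet", 100_000_000), ("Gigabit", 1_000_000_000)]:
    print(name, ospf_cost(bps))
# T1 64, 10 Mb Ethernet 10, Fast Ethernet 1, Gigabit 1
```

This is why networks with links faster than 100 Mb/s typically raise the reference bandwidth (on Cisco gear, auto-cost reference-bandwidth) so that faster links can still be preferred.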

Integrated IS-IS:

Integrated Intermediate System – Intermediate System routing protocol is a link state protocol similar to OSPF that is used with large enterprise and ISP customers. An intermediate system is a router, and IS-IS is the routing protocol that routes packets between intermediate systems. IS-IS utilizes a link state database and runs the SPF Dijkstra algorithm to select shortest-path routes. Neighbor routers on point-to-point and point-to-multipoint links establish adjacencies by sending hello packets and exchanging link state databases. IS-IS routers on broadcast and NBMA networks select a designated router that establishes adjacencies with all neighbor routers on that network. The designated router and each neighbor router will establish an adjacency with all neighbor routers by multicasting link state advertisements to the network itself. That is different from OSPF, which establishes adjacencies between the DR and each neighbor router only.

IS-IS uses a hierarchical area structure with level 1 and level 2 router types. Level 1 routers are similar to OSPF intra-area routers, which have no direct connections outside of their area. Level 2 routers comprise the backbone area, which connects different areas similar to OSPF area 0. With IS-IS a router can be an L1/L2 router, which is like an OSPF area border router (ABR) that has connections with its area and the backbone area. The difference with IS-IS is that the links between routers comprise the area borders, and not the router.

Each IS-IS router must have an assigned address that is unique for that routing domain. An address format is used which is comprised of an area ID and a system ID. The area ID is the assigned area number and the system ID is a MAC address from one of the router interfaces. There is support for variable length subnet masks, which is standard with all link state protocols. Note that IS-IS assigns the routing process to an interface instead of a network.


  • Link State
  • Routes IP, CLNS
  • Routing Advertisements: Partial When Routing Changes Occur
  • Metric: Variable Cost (default cost 10 assigned to each interface)
  • Hop Count: None (limited by network)
  • Variable Length Subnet Masks
  • Summarization on Network Class Address or Subnet Boundary
  • Load Balancing Across 6 Equal Cost Paths
  • Hello Timer Interval: 10 seconds
  • Dead Timer Interval: 30 seconds
  • Area Types: Hierarchical Topology similar to OSPF
  • Router Types: Level 1 and Level 2
  • LSP Types: Internal L1 and L2, External L2
  • Designated Router Election, No BDR
  • Bidirectional Forwarding Detection (BFD) Support

Border Gateway Protocol (BGP):

Border Gateway Protocol is an exterior gateway protocol, unlike the interior gateway protocols discussed so far. The distinction is important, since the term autonomous system is used somewhat differently with protocols such as EIGRP than it is with BGP. Exterior gateway protocols such as BGP route between autonomous systems, each of which is assigned a particular AS number. An AS number can be assigned to an office with one or several BGP routers. The BGP routing table is composed of destination IP addresses, an associated AS path to reach each destination, and a next hop router address. The AS path is the sequence of AS numbers representing each office involved in routing the packets. Contrast that with EIGRP, which also uses autonomous systems; there, an autonomous system refers to a logical grouping of routers within the same administrative domain, and an EIGRP network can be configured with many autonomous systems, all managed by the company to define route summarization, redistribution, and filtering. BGP is widely used by Internet Service Providers (ISPs) and large enterprises that have dual-homed internet connections, with single or dual routers homed to the same or different ISPs. BGP routes packets across an ISP network, which is a separate routing domain managed by the ISP. The ISP has its own assigned AS number. New customers can request an AS assignment for their office from either the ISP or the registry; a unique AS number is required for customers that connect using BGP. BGP uses a set of path attributes, evaluated in a defined order, as metrics to determine the best path to a destination. Companies with only one circuit connection to an ISP will typically implement a default route at their router, which forwards any packets destined for an external network.
BGP routers exchange full routing tables with their peers when a session is first established; once that is finished, only incremental updates are sent as the topology changes. Routing information can also be redistributed between BGP and the IGPs running on the network (EIGRP, RIP, OSPF, etc.). The BGP default keepalive timer is 60 seconds, and the holddown timer is 180 seconds. Each BGP router can be configured with route maps to filter routing advertisements instead of sending or receiving the entire internet routing table.
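On a Linux host the single-homed default route mentioned above could be expressed with iproute2 roughly as follows; this is an illustrative fragment, and 203.0.113.1 is a placeholder (documentation-range) ISP next-hop address, not one taken from the text:

```shell
# hypothetical single-homed setup: rather than learning the full BGP
# table, send all non-local traffic to the ISP's next-hop router
# (requires root; 203.0.113.1 is a placeholder address)
ip route add default via 203.0.113.1
```

A dedicated router would achieve the same effect with a static default route in its own configuration syntax.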


  • Path Vector
  • Routes IP
  • Routing Advertisements: Partial When Route Changes Occur
  • Metrics: Weight, Local Preference, Local Originated, As Path, Origin Type, MED
  • Hop Count: 255
  • Variable Length Subnet Masks
  • Summarization on Network Class Address or Subnet Boundary
  • Load Balancing Across 6 Equal Cost Paths
  • Keepalive Timer: 60 seconds
  • Holddown Timer: 180 seconds
  • Bidirectional Forwarding Detection (BFD) Support
  • Designated Router: Route Reflector

BGP Routing Table Components:

  • Destination IP Address / Subnet Mask
  • AS-Path
  • Next Hop IP Address

All about processors in Linux

The difference between physical CPUs, CPU cores, and logical CPUs, plus how to find information about available processors, free sockets, etc. on a Red Hat OS.

Note: convention used

Commands are shown starting with “#” in bold; the description/result of a command is shown starting with “##” in italics.

something like this

#command    ##description/result  of a command

  1. Physical processor: the processor that is physically seated on the motherboard.
  2. Cores: each physical processor may have a number of cores built into it (these are the physical cores available).
  3. Logical cores: the number of processors seen by the OS/kernel. Each core can present as more than one logical processor if hyper-threading is enabled, and each logical processor can handle an instruction independently.
The following command will show how many active physical processors a system has. Example: if this number is 2, one could potentially open up the system chassis and remove 2 physical processors by hand.
#cat /proc/cpuinfo | grep -i 'physical id' | sort -u | wc -l
2    ## 2 physical processors
##sort -u gives a unique sort; wc -l returns the processor count
On a system with multi-core processors, the following command will report the number of CPU cores per physical processor (though in rare cases it might not). Example: If this number is 4 and physical CPUs is 2, then each of the 2 physical processors has 4 CPU cores, leading to a total of 8 cores.
#cat /proc/cpuinfo | grep -i cpu.cores  | sort -u

cpu cores       : 6


Each physical processor has 6 cores; continuing the example above, the 2 physical processors have a total of 12 cores.
The command below shows the total number of logical processors seen by the Linux kernel. This is the most important number, as it is the actual count of CPUs seen by the OS, and thus the number of CPUs that can work independently on any given instruction. If the number of CPU cores and the number of logical processors are the same, hyper-threading is not enabled.
#cat /proc/cpuinfo | grep -i processor | wc -l
24    ## 24 logical processors
(per the example above, the 12 physical cores are seen by the OS as 24 CPUs, meaning each core appears as 2 logical processors; the number of threads per core is 2)
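The hyper-threading rule above can be checked directly from `lscpu`'s "Thread(s) per core" field; this is a small sketch that falls back to 1 if `lscpu` is unavailable:

```shell
# threads per core > 1 means each physical core is presented to the
# kernel as more than one logical CPU, i.e. hyper-threading/SMT is on
tpc=$(lscpu 2>/dev/null | awk -F: '/^Thread\(s\) per core/ {gsub(/ /, "", $2); print $2}')
tpc=${tpc:-1}    # assume 1 thread per core if lscpu is missing
if [ "$tpc" -gt 1 ]; then smt=enabled; else smt=disabled; fi
echo "SMT is $smt ($tpc thread(s) per core)"
```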
How to determine the number of CPU sockets:
#dmidecode -t4 | grep 'Socket Designation:' | wc -l
#lstopo --whole-system --only socket | wc -l
( OR )
One simple command reveals most of the information you are looking for:
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24                                                      ##  logical cores
On-line CPU(s) list:   0-23
Thread(s) per core:    2                                                ## Threads
Core(s) per socket:    6                                                ## CORES
Socket(s):             2                                                     ## Sockets
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 44
Stepping:              2
CPU MHz:               1600.000
BogoMIPS:              6133.27
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              12288K
NUMA node0 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23
NUMA node1 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22
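The three annotated fields multiply out to the logical CPU count; using the figures from this output:

```shell
# logical CPUs = Socket(s) x Core(s) per socket x Thread(s) per core
sockets=2
cores_per_socket=6
threads_per_core=2
logical_cpus=$((sockets * cores_per_socket * threads_per_core))
echo "$logical_cpus logical CPUs"    # 2 x 6 x 2 = 24
```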


Kerberos Ticketing.


The client authenticates itself to the Authentication Server (AS), which forwards the username to the key distribution center (KDC).
The KDC issues a time-stamped ticket-granting ticket (TGT), encrypts the accompanying session key with a key derived from the user’s password, and returns the result to the user’s workstation.
This is done infrequently, typically at user logon; the TGT expires at some point, though it may be transparently renewed by the user’s session manager while they are logged in.

When the client needs to communicate with another node (“principal” in Kerberos parlance) the client sends the TGT to the ticket-granting service (TGS),
which usually shares the same host as the KDC. After verifying the TGT is valid and the user is permitted to access the requested service, the TGS issues a ticket and session keys,
which are returned to the client. The client then sends the ticket to the service server (SS) along with its service request.
Kerberos negotiations
The protocol is described in detail below.

User Client-based Logon
A user enters a username and password on the client machine. Other credential mechanisms, such as pkinit (RFC 4556), allow the use of public keys in place of a password.
The client transforms the password into the key of a symmetric cipher, using either the cipher’s built-in key scheduling or a one-way hash, depending on the cipher suite used.
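As a rough illustration of that transformation only (real Kerberos string-to-key is salted and cipher-specific, e.g. PBKDF2-based for the AES enctypes), a one-way hash turns a password into fixed-length key material:

```shell
# toy sketch: hash a password into 256 bits of key material; real
# Kerberos derives keys with a salt tied to the principal name
password='example-password'
key=$(printf '%s' "$password" | sha256sum | awk '{print $1}')
echo "derived key: $key"
```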
Client Authentication

The client sends a cleartext message of the user ID to the AS (Authentication Server) requesting services on behalf of the user.
(Note: Neither the secret key nor the password is sent to the AS.) The AS generates the secret key by hashing the password of the user found in the database
(e.g., Active Directory in Windows Server).

The AS checks to see if the client is in its database. If it is, the AS sends back the following two messages to the client:

Message A: Client/TGS Session Key encrypted using the secret key of the client/user.

Message B: Ticket-Granting Ticket (TGT, which includes the client ID, client network address, ticket validity period, and the client/TGS session key),
encrypted using the secret key of the TGS.
Once the client receives messages A and B, it attempts to decrypt message A with the secret key generated from the password entered by the user.
If the user-entered password does not match the password in the AS database, the client’s secret key will be different and thus unable to decrypt message A.
With a valid password and secret key the client decrypts message A to obtain the Client/TGS Session Key.
This session key is used for further communications with the TGS. (Note: The client cannot decrypt Message B, as it is encrypted using TGS’s secret key.)
At this point, the client has enough information to authenticate itself to the TGS.

Client Service Authorization

When requesting services, the client sends the following messages to the TGS:

Message C: Composed of the TGT from message B and the ID of the requested service.

Message D: Authenticator (which is composed of the client ID and the timestamp), encrypted using the Client/TGS Session Key.
Upon receiving messages C and D, the TGS retrieves message B out of message C. It decrypts message B using the TGS secret key.
This gives it the “client/TGS session key”. Using this key, the TGS decrypts message D (Authenticator) and sends the following two messages to the client:

Message E: Client-to-server ticket (which includes the client ID, client network address, validity period and Client/Server Session Key) encrypted using
the service’s secret key.

Message F: Client/Server Session Key encrypted with the Client/TGS Session Key.

Client Service Request

Upon receiving messages E and F from TGS, the client has enough information to authenticate itself to the SS.
The client connects to the SS and sends the following two messages:

Message E from the previous step (the client-to-server ticket, encrypted using service’s secret key).

Message G: a new Authenticator, which includes the client ID, timestamp and is encrypted using Client/Server Session Key.
The SS decrypts the ticket using its own secret key to retrieve the Client/Server Session Key. Using the session key,
SS decrypts the Authenticator and sends the following message to the client to confirm its true identity and willingness to serve the client:

Message H: the timestamp found in the client’s Authenticator, encrypted using the Client/Server Session Key.
The client decrypts the confirmation using the Client/Server Session Key and checks whether the timestamp is correct. If so, the client can trust the server
and can start issuing service requests to the server.
The server provides the requested services to the client.
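With the MIT Kerberos client tools installed (e.g. the krb5-workstation package) and a reachable KDC, the exchanges above can be observed from the shell; `user@EXAMPLE.COM` and `host/server.example.com` are placeholder principal names, and a live realm is assumed:

```shell
kinit user@EXAMPLE.COM           # AS exchange: password becomes a cached TGT
klist                            # list cached tickets: the TGT and its validity period
kvno host/server.example.com     # TGS exchange: obtain a service ticket for that principal
klist                            # the cache now also holds the service ticket
kdestroy                         # wipe the credential cache at logout
```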

How to activate extended RAM in a Red Hat VMware guest

When the RAM is extended in a Red Hat OS, the newly extended memory may not be seen by the OS. The steps below explain how to make the newly extended RAM accessible to the OS without a reboot.




Listing the available memory blocks and their state:

#grep . /sys/devices/system/memory/*/state    ##prints each memory block's state file along with its contents (online/offline)



Memory blocks that show up as offline are the newly extended RAM; they must be brought online for the OS to recognize and use them.

Echo “online” into the state files that are shown as offline:
#echo online > /sys/devices/system/memory/memory9/state
It does no harm to do this for all of the state files, which can be done with a simple loop:

# ls -1 /sys/devices/system/memory/*/state > /tmp/file.txt    ## dumps each state file name, one per line, into file.txt
# for i in $(cat /tmp/file.txt) ; do echo online > "$i"; done    ## writes “online” into every state file
#free -m    ##the newly added memory can now be seen
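The loop above can be wrapped in a small reusable function; `online_offline_blocks` and its directory argument are illustrative additions (on a real host the directory is `/sys/devices/system/memory` and writing there requires root):

```shell
# online_offline_blocks: write "online" into every memory block state
# file under DIR that currently reads "offline"; DIR defaults to the
# real sysfs path, but can be pointed at a copy for safe testing
online_offline_blocks() {
    dir=${1:-/sys/devices/system/memory}
    for f in "$dir"/memory*/state; do
        [ -e "$f" ] || continue      # glob matched nothing: skip
        if grep -q offline "$f"; then
            echo online > "$f"
        fi
    done
}
```

Run `online_offline_blocks` as root after extending the VM's RAM, then confirm with `free -m`.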