Discussion:
NetApp vs. EMC
Kiman,Jang
2005-07-12 05:15:22 UTC
Permalink
Hi,
I'm considering buying NAS storage.
Two vendors submitted bids: a NetApp FAS940C and an EMC NS702G.
I will use 2TB of SAN and 4TB of NAS storage.
The SAN storage will attach to an Oracle DBMS, and the NAS storage will
attach to Windows NT and UNIX.
Please recommend the best storage!
David A.Lethe
2005-07-12 05:26:09 UTC
Permalink
Post by Kiman,Jang
Hi,
I'm considering buying NAS storage.
Two vendors submitted bids: a NetApp FAS940C and an EMC NS702G.
I will use 2TB of SAN and 4TB of NAS storage.
The SAN storage will attach to an Oracle DBMS, and the NAS storage will
attach to Windows NT and UNIX.
Please recommend the best storage!
How do you measure "best"??
- price, performance, availability, TCO, MTBF, backup/recovery
downtime, application certs, 24x7 support, installation/config
services, etc ...
Kiman,Jang
2005-07-12 05:36:56 UTC
Permalink
Post by Kiman,Jang
Hi,
I'm considering buying NAS storage.
Two vendors submitted bids: a NetApp FAS940C and an EMC NS702G.
I will use 2TB of SAN and 4TB of NAS storage.
The SAN storage will attach to an Oracle DBMS, and the NAS storage will
attach to Windows NT and UNIX.
Please recommend the best storage!
Price: $200,000
The important characteristics are performance, application workload
(Oracle DBMS), and file sharing.
Faeandar
2005-07-13 15:20:15 UTC
Permalink
Post by Kiman,Jang
Post by Kiman,Jang
Hi,
I'm considering buying NAS storage.
Two vendors submitted bids: a NetApp FAS940C and an EMC NS702G.
I will use 2TB of SAN and 4TB of NAS storage.
The SAN storage will attach to an Oracle DBMS, and the NAS storage will
attach to Windows NT and UNIX.
Please recommend the best storage!
Price: $200,000
The important characteristics are performance, application workload
(Oracle DBMS), and file sharing.
NetApp is head and shoulders better than EMC when it comes to NAS.
And if Oracle is your key concern, then consider this: Oracle is
NetApp's single largest customer. That should tell you something.

~F
AWS
2005-07-15 16:03:49 UTC
Permalink
Post by Faeandar
NetApp is head and shoulders better than EMC when it comes to NAS.
You base this opinion on?
Post by Faeandar
And if Oracle is your key concern, then consider this: Oracle is
NetApp's single largest customer. That should tell you something.
Funny. That's straight from the NetApp sales pitch I heard a few years
ago... What does that say about NetApp? There *are* larger companies
than Oracle, you know. Also, Oracle spends a lot of money on EMC too,
and I seriously doubt they use NetApp for hosting their production
RDBMSes other than for POC/testing.

Very seldom is a single vendor the answer for everything. Try to base
your opinions/recommendations on facts rather than prejudices...

Aaron
HVB
2005-07-15 17:04:20 UTC
Permalink
Post by AWS
Post by Faeandar
And if Oracle is your key concern, then consider this: Oracle is
NetApp's single largest customer. That should tell you something.
Funny. That's straight from the NetApp sales pitch I heard a few years
ago... What does that say about NetApp?
It's still something NetApp likes to boast about - which isn't
necessarily a bad thing.
Post by AWS
Also, Oracle spends a lot of money on EMC too,
and I seriously doubt they use NetApp for hosting their production
RDBMSes other than POC/Testing.
Yep... and there's this interesting EMC - Cisco - Oracle joint
marketing web site: http://www.eecostructure.com/

HVB.
carmelomcc
2005-07-15 18:02:50 UTC
Permalink
If you want performance and do not want to go to Fibre Channel, NetApp
is better than EMC. If you can jump to fibre, then I would look at HDS.
Faeandar
2005-07-15 21:36:35 UTC
Permalink
Post by AWS
Post by Faeandar
NetApp is head and shoulders better than EMC when it comes to NAS.
You base this opinion on?
On the OP's interest in NAS. EMC NAS performs poorly, has poor
availability, and has poor data integrity, a la DART.

All this is according to actual EMC NAS customers, of which I am not
one, so this is not first-hand. However, postings, e-mail lists, and
personal contact with EMC NAS customers speak volumes. I've yet to
meet someone happy with EMC NAS. EMC SAN is a completely different
animal.
Post by AWS
Post by Faeandar
And if Oracle is your key concern, then consider this: Oracle is
NetApp's single largest customer. That should tell you something.
Funny. That's straight from the NetApp sales pitch I heard a few years
ago... What does that say about NetApp? There *are* larger companies
than Oracle, you know. Also, Oracle spends a lot of money on EMC too,
and I seriously doubt they use NetApp for hosting their production
RDBMSes other than for POC/testing.
Oracle employees have confirmed that they use a lot of NetApp in a lot
of production, though I personally cannot validate it.
Post by AWS
Very seldom is a single vendor the answer for everything. Try to base
your opinions/recommendations on facts rather than prejudices...
Never said they were. And though I am excruciatingly pro-NetApp, I am
not blind to possibilities other than them. The simple truth for me
is that, for traditional NAS, nothing beats NetApp overall. When it
comes to SAN, I counsel against them. When it comes to blinding
performance, you have to go past traditional NAS, which means not
NetApp. Point is, I can see the forest for the trees.

And my opinions and recommendations are based on facts. Not all facts
matter to all people though.

~F
boatgeek
2005-07-26 20:48:31 UTC
Permalink
We've got both: around 26 NetApp filers (120 TB or so) and several EMC
SANs, DMX 2000s and Clariion CX700s. For NAS ability, NetApp can do
254 snapshots, which means that you can do a snapshot every hour for
10+ days. We currently do hourly snapshots for a day, then daily
snapshots for a week, then weekly snapshots which we keep for four
weeks. This allows us to restore something instantly that someone was
working on that day with fairly good granularity; in other words, they
don't lose more than an hour of work. It's used often. EMC snaps are
fairly limited, I think no more than 12, and the process isn't as
useful and degrades performance.
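The retention math in the schedule above can be sanity-checked with a
quick sketch (the tier names and counts here are just a model of the
schedule described in this post, not NetApp syntax):

```python
# Model of the schedule above: hourly snapshots kept for a day,
# dailies for a week, weeklies for four weeks.
schedule = {"hourly": 24, "daily": 7, "weekly": 4}

total = sum(schedule.values())
print(f"snapshots retained at steady state: {total}")      # 35
print(f"within the 254-snapshot ceiling: {total <= 254}")  # True
# Worst case, a restore loses at most one hourly interval of work.
```

At 35 snapshots per volume, the schedule uses well under the stated
254-snapshot limit, leaving headroom for finer-grained tiers.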

The NetApp filers we have (960s) can do around 120 MB/sec and fairly
high I/O. This is significant. With Exchange, Oracle and SQL you
can do an instantaneous snapshot and do it hot, involving really no
downtime and with little storage cost in terms of needing more disks.
EMC isn't as efficient for disk storage.

NetApp also has a consistent platform across all of its storage, so
you can have anything and it will talk to everything else. EMC
Clariions and Symms are different animals, with different code and
different abilities.

The best thing about the EMC platform is its N+1 design, meaning you
shouldn't have to take an outage to replace parts or upgrade firmware.
NetApp does need an outage window, though by configuring the NFS
timeout there can be no perceived outage for the NFS users, and CIFS
users auto-reconnect. So a NetApp outage is manageable.
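The NFS-timeout trick mentioned above works because hard-mounted
clients simply keep retrying through a reboot. A simplified model
(the doubling-backoff behavior and default values here are
assumptions for illustration, not exact kernel behavior) shows how
quickly a soft mount would give up by comparison:

```python
def soft_mount_giveup_seconds(timeo_tenths=7, retrans=3):
    """Rough model: a soft NFS mount retries `retrans` times, doubling
    the timeout each attempt, then returns an error to the application.
    A hard mount retries forever, which is why a filer reboot that
    finishes inside the clients' patience window is invisible to users.
    """
    total_tenths = sum(timeo_tenths * (2 ** i) for i in range(retrans + 1))
    return total_tenths / 10.0

# Assumed defaults (timeo=7, i.e. 0.7 s, retrans=3) give up after
# ~10.5 s -- far less than a head reboot, hence the advice to raise
# timeouts (or use hard mounts) before planned maintenance.
print(soft_mount_giveup_seconds())  # 10.5
```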

I would choose EMC if the main requirement is that it never go down.
For feature-rich services, NetApp probably outshines EMC in terms
of the protocols they support and ease of management. EMC is desperate
to get their NAS device going. They gave us two for free. Frankly,
we haven't turned them on because we don't see the advantage and it
would add management difficulty.

-Doug
Faeandar
2005-07-27 02:11:39 UTC
Permalink
Post by boatgeek
We've got both: around 26 NetApp filers (120 TB or so) and several EMC
SANs, DMX 2000s and Clariion CX700s. For NAS ability, NetApp can do
254 snapshots, which means that you can do a snapshot every hour for
10+ days. We currently do hourly snapshots for a day, then daily
snapshots for a week, then weekly snapshots which we keep for four
weeks. This allows us to restore something instantly that someone was
working on that day with fairly good granularity; in other words, they
don't lose more than an hour of work. It's used often. EMC snaps are
fairly limited, I think no more than 12, and the process isn't as
useful and degrades performance.
The NetApp filers we have (960s) can do around 120 MB/sec and fairly
high I/O. This is significant. With Exchange, Oracle and SQL you
can do an instantaneous snapshot and do it hot, involving really no
downtime and with little storage cost in terms of needing more disks.
EMC isn't as efficient for disk storage.
NetApp also has a consistent platform across all of its storage, so
you can have anything and it will talk to everything else. EMC
Clariions and Symms are different animals, with different code and
different abilities.
The best thing about the EMC platform is its N+1 design, meaning you
shouldn't have to take an outage to replace parts or upgrade firmware.
NetApp does need an outage window, though by configuring the NFS
timeout there can be no perceived outage for the NFS users, and CIFS
users auto-reconnect. So a NetApp outage is manageable.
Actually, most things NetApp can be upgraded on the fly if they're
clustered. We just upgraded shelf and disk firmware hot; not even a
failover is required for that these days. About the only thing that
really requires downtime, in my experience, is a shelf replacement.
Even OS upgrades are not a downtime for NFS; CIFS may be a different
story.

Also, I've talked to an EMC NAS customer in my area, and the last time
they did a code upgrade it required a 6-hour downtime. I don't know
the details, but I know the admin was pissed out of his mind. They're
in the process of replacing them with NetApp. They weren't looking at
NetApp prior to that event.

If I sound anti-EMC, it's because I am. Their NAS is actually bad, so
no issue with that. The SAN hardware (DMX) is pretty rock-solid stuff
from what I understand, but the way EMC does business just rubs me the
wrong way. Therefore I avoid them whenever possible.

However, anti-EMC is not the same as only-NetApp. For traditional NAS
NetApp is the best imo. For SAN find something else.

~F
Post by boatgeek
I would choose EMC if the main requirement is that it never go down.
For feature-rich services, NetApp probably outshines EMC in terms
of the protocols they support and ease of management. EMC is desperate
to get their NAS device going. They gave us two for free. Frankly,
we haven't turned them on because we don't see the advantage and it
would add management difficulty.
-Doug
boatgeek
2005-07-28 21:48:15 UTC
Permalink
I too am a NetApp fan; I didn't want to make it sound otherwise. The
types of upgrades or errors which would cause an outage (from our
experience) are:

Upgrades of the heads (adding, swapping, or replacing cards, or
swapping the head itself).

Replacing almost anything on the disk shelf other than the disk itself
(an LRC, shelf faults with temp sensors since they're built into the
physical shelf, problems with the cables, or recabling the cables to
allow expansion). The reason is the potential for disk corruption, and
hot LRC replacement is not supported by NetApp. You can probably try a
failover, replace it, then fail back, but NetApp won't certify it and
you could possibly panic the box.

Any diagnostic which requires a core dump (we've had to do it 4 times).
To do that you basically push the power button and it should
automatically generate one.

Certain code upgrades (6.4 to 6.5), or going to 7.0, as it will
involve completely restructuring the disk array to take advantage of
FlexVols (unless you try using something like a Rainfinity or NeoPath
in-band replication mechanism).

Again, I don't want to appear to be a NetApp basher, because it's so
much better than Windows 2000 cluster servers sitting on a SAN, and
many of these types of outages could be mitigated by playing with the
disk timeouts or NFS timeouts, but that might not be the best solution
depending on the criticality of the database and the possibility of
data corruption. I think we've been treated too carefully by NetApp
because we are a very large customer who has encountered lots of bugs,
and they were worried. Now we need to do things that I'm sure you've
already done, such as extending the NFS timeouts so we can do a routine
operating system upgrade without having to create an outage window.

Lots of companies, such as Sprint, use NetApp as a SAN, because its
ability to use snapshots with SQL is vastly superior. The integration
of Exchange using a filer as a SAN with SnapManager on the Exchange
server is also very powerful and industry-leading. So even for
certain SAN uses, NetApp might be a good choice. Though certainly as
a regular SAN for ordinary disks, it's just too complicated with NetApp
vs. a very simple-to-configure and robust EMC or Hitachi array.

Also, NetApp is integrating with Spinnaker, and that should make the
units themselves virtualized behind a namespace, which will be great.


I love NetApp's flexibility. Its ability to make a simple DR plan
(which we actually had to use when we lost a building), its ability
to retrieve brick-level data from Exchange, and its ability to do
instant Oracle hot backups all far outweigh, in the vast majority of
cases, the small amount of time you would need to take for an outage
in cases like the ones above.

Specifically, we are looking at products like Rainfinity to help
virtualize the storage systems under NIS and DFS and do in-band
replication to move critical data off of systems which might need
hardware replaced, to avoid the outage.

-Doug
Faeandar
2005-07-28 23:28:54 UTC
Permalink
Post by boatgeek
I too am a NetApp fan; I didn't want to make it sound otherwise. The
types of upgrades or errors which would cause an outage (from our
Upgrades of the heads (adding, swapping, or replacing cards, or
swapping the head itself).
Able to do almost all of this with cluster failover, minus a full head
swap (I think that's possible too but requires some serious prep
work).
Post by boatgeek
Replacing almost anything on the disk shelf other than the disk itself
(an LRC, shelf faults with temp sensors since they're built into the
physical shelf, problems with the cables, or recabling the cables to
allow expansion). The reason is the potential for disk corruption, and
hot LRC replacement is not supported by NetApp. You can probably try a
failover, replace it, then fail back, but NetApp won't certify it and
you could possibly panic the box.
Temp sensors require downtime; we just had to do one recently. It
sucked. LRC replacement is cold too, but ESH modules are hot-swappable,
so it depends on the vintage of your hardware, I guess. Recabling can
be done on the fly (cluster failover), and expansions are hot, assuming
newer hardware.
Post by boatgeek
Any diagnostic which requires a coredump (we've had to do it 4 times).
To do that you basically push the power button and it should
automatically generate one.
Certain code upgrades (6.4 to 6.5), or going to 7.0, as it will
involve completely restructuring the disk array to take advantage of
FlexVols (unless you try using something like a Rainfinity or NeoPath
in-band replication mechanism).
I count both of the above as reboots, well within NFS timeouts. In 6
years I've only seen one application that didn't deal well with NFS
timeout retries. Meaning a normal upgrade, reboot, or manual core
dump is non-disruptive to the clients. All this is NFS-based,
mind you; CIFS gets dorked in all cases except cluster failover. And
fail-back only survives if you have newer hardware with CompactFlash
cards.
Post by boatgeek
Again, I don't want to appear to be a NetApp basher, because it's so
much better than Windows 2000 cluster servers sitting on a SAN, and
many of these types of outages could be mitigated by playing with the
disk timeouts or NFS timeouts, but that might not be the best solution
depending on the criticality of the database and the possibility of
data corruption. I think we've been treated too carefully by NetApp
because we are a very large customer who has encountered lots of bugs,
and they were worried. Now we need to do things that I'm sure you've
already done, such as extending the NFS timeouts so we can do a routine
operating system upgrade without having to create an outage window.
Lots of companies, such as Sprint, use NetApp as a SAN, because its
ability to use snapshots with SQL is vastly superior. The integration
of Exchange using a filer as a SAN with SnapManager on the Exchange
server is also very powerful and industry-leading. So even for
certain SAN uses, NetApp might be a good choice. Though certainly as
a regular SAN for ordinary disks, it's just too complicated with NetApp
vs. a very simple-to-configure and robust EMC or Hitachi array.
I actually counsel against NetApp as block-based storage. It's
designed as a file server and excels in that arena; it is not designed
as block-based storage (for clients, that is). That being said,
there's almost no reason Oracle (or most DBs) requires block access
these days. Plenty of companies, including mine, run Oracle over NFS
and it works fantastically. Oracle over NFS on NetApp is, as unbiased
as possible, far superior to running it with any other block-based
storage.

Snapshots - true block-delta changes based on files
SnapMirror/SnapVault - stupidly easy replication for DR, test, dev,
etc.
SnapRestore - near-instant recovery to any point-in-time snapshot
NFS - built-in multi-writer (and therefore clusterable) file system

We're running Oracle 10G RAC over NFS on a cluster pair of filers.
It's truly awesome.

The 5% of databases that truly need the ultimate in performance should
run on something like HDS, completely optimized for performance. The
other 95% could easily run over NFS with all the features, functions,
and ease of management and recovery that NetApp allows. Of course,
if you need CIFS, that's a different story.

You can probably tell I don't use CIFS much...
Post by boatgeek
Also, netapp is integrating with spinnaker, and that should make the
units themselves virtualized behind a namespace, which will be great.
I love netapp's flexibility, and it's ability to make a simple DR plan
(which we actually had to use when we lost a building) or its ability
to retrieve brick level data from exchange, or do instant oracle hot
backups all far outweigh in the vast majority of cases the small amount
of time you would need to take for an outage in the cases you might
encounter such as the ones above.
Specifically we are looking at products like rainfinity to help
virtualize the storage systems under NIS and DFS and do inband
replication to move critical data off of systems which might need
hardware replaced to avoid the outage.
-Doug
We've got Rainstorage in house and have had it for a while now. I've
used them at 2 different jobs also. Nice product; great for what I'm
asking of it. Works almost all the time. ;-]

For CIFS it's not that good, since you have to break the connections to
go in-band. If you plan to stay in-band then no problem, but I have a
hard time leaving a single-gig Linux box in front of my trunked-gig
filer.

~F
carmelomcc
2005-07-29 11:57:20 UTC
Permalink
What type of Oracle database is it? If you do a lot of random reads,
you will need a SAN-based solution. I have implemented a ton of RAC
clusters, and nothing in the end is faster than SAN. I would look at
the Clariion line or the HDS Lightning line. You can configure a lot of
RAID 1/0, which will be the best thing you can do for an Oracle DB. I
hate to say this... any high-transaction DB needs fibre. To try
to do it on NAS is a wasted effort. It has been proven that even with
10 Gig Ethernet you cannot outperform a solid Fibre Channel solution.
Now with 4 Gig Fibre Channel, I would not even burn the cycles on NAS.
boatgeek
2005-07-29 15:25:51 UTC
Permalink
The Clariion's speed is roughly the speed of the NetApp FAS960 (don't
put anything on them asking for more than 100 MB/sec total or around
50-60k IOs/sec). Clariions are mid-tier storage in my mind vs. the DMX.
We have NetApps, Clariions, Hitachis and Symms. We have around 250
Oracle databases on 6 pairs of NetApp filers. It's much easier to give
out space on the NetApp, much easier. As to speed, the EMC DMX 2000
and 3000 are FAR faster than NetApp, but frankly two of our filers are
pushing around 120 MB/sec between them. The rest of our entire switch
infrastructure, with 4 EMC SANs and 4 Hitachi SANs, is doing around
20 MB/sec. The reason is the DBAs never know what the throughput is
going to be.

The speed debate is often silly. DBAs, or baby-faced consultants with
no real-world experience, ask for the fastest thing possible and run
everyone's budget into the ground. It's like saying you want to commute
from DC to NY, and therefore if you had a Ferrari you could make the
trip as fast as possible. When in fact, the guy with the diesel Jetta
will be right behind you every step of the way at a tenth of the cost,
because of traffic lights and speed limits and all sorts of real-world
things that stop performance from almost ever touching the sky-high
limits of a DMX. So if you know you have something that really, really
cranks out data, then definitely go to a high-end SAN like a DMX, but
be prepared to pay 4 times the cost of the NetApp and spend 5 times as
long trying to get it going.
Faeandar
2005-07-29 21:56:55 UTC
Permalink
Post by boatgeek
The Clariion's speed is roughly the speed of the NetApp FAS960 (don't
put anything on them asking for more than 100 MB/sec total or around
50-60k IOs/sec). Clariions are mid-tier storage in my mind vs. the DMX.
We have NetApps, Clariions, Hitachis and Symms. We have around 250
Oracle databases on 6 pairs of NetApp filers. It's much easier to give
out space on the NetApp, much easier. As to speed, the EMC DMX 2000
and 3000 are FAR faster than NetApp, but frankly two of our filers are
pushing around 120 MB/sec between them. The rest of our entire switch
infrastructure, with 4 EMC SANs and 4 Hitachi SANs, is doing around
20 MB/sec. The reason is the DBAs never know what the throughput is
going to be. The speed debate is often silly. DBAs, or baby-faced
consultants with no real-world experience, ask for the fastest thing
possible and run everyone's budget into the ground. It's like saying
you want to commute from DC to NY, and therefore if you had a Ferrari
you could make the trip as fast as possible. When in fact, the guy
with the diesel Jetta will be right behind you every step of the way
at a tenth of the cost, because of traffic lights and speed limits and
all sorts of real-world things that stop performance from almost ever
touching the sky-high limits of a DMX. So if you know you have
something that really, really cranks out data, then definitely go to a
high-end SAN like a DMX, but be prepared to pay 4 times the cost of
the NetApp and spend 5 times as long trying to get it going.
I think that was a great summation, I'm gonna have to remember the NY
trip explanation.

For carmelomcc I can only say this: as I mentioned previously, only
about 5% of *ALL* databases need transaction performance beyond what
NAS can do. So if you run in that 5%, then by all means get block
access with an HDS or EMC or IBM. But if you're like the other 95%,
NAS has all the power you'll need *plus* features, functions, and
manageability you will not find with current block-access arrays.

"What type of Oracle Database is it? If you do allot of Random Read
you will need a SAN based solution. I have implemented a ton of raq
clusters and nothing in the end is faster then SAN."

Several items to address there.

1) 10g RAC (I assume 'raq' was a typo on your part) is what we've
implemented. We do random reads, but the definition of "a lot" is
variable. The simple fact is most people who think they do a lot of
random access really don't.
2) I never said NAS was faster than SAN, merely that the vast majority
of DBs out there don't need the blazing performance they think they
do.
3) Implementing RAC also adds overhead with a CFS on SAN. With NAS
you don't have any of that overhead or complexity. It's built into
the protocol.
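Point 1 can be made concrete with a back-of-the-envelope latency model
(every number below is an illustrative assumption, not a measurement):
for random reads on disks of this era, seek time dominates, so the
transport's extra latency moves per-operation throughput relatively
little.

```python
# Illustrative latencies only: the disk seek dominates a random read.
disk_ms = 5.0          # assumed seek + rotational latency
fc_overhead_ms = 0.1   # assumed FC fabric/HBA overhead per op
nas_overhead_ms = 0.5  # assumed Ethernet + NFS stack overhead per op

# Per-spindle ops/sec with a single outstanding I/O.
iops_fc = 1000 / (disk_ms + fc_overhead_ms)
iops_nas = 1000 / (disk_ms + nas_overhead_ms)
print(round(iops_fc), round(iops_nas))  # 196 182
```

Under these assumptions the transport costs well under 10% of random-read
IOPS, which is why most workloads never notice the difference.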

As a side note, if the DB server is under heavy load from memory and
CPU usage, NAS can actually be faster than SAN or DAS. The reason is
that the server only makes a single call to the driver for data; the
actual I/O functions are offloaded to the NAS host. And when dealing
with DBs under 6GB, it's essentially a memory-to-memory transfer.

Several publicly available tests have been done showing that NFS, as a
protocol, is not slower than Fibre Channel. It's mostly the overhead
of Ethernet: roughly 70% utilization vs. 90%.
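Those utilization figures translate into effective throughput roughly
like this (the link rates and the 70%/90% efficiencies are the thread's
rough numbers, taken at face value):

```python
def effective_mb_per_s(link_gbit, efficiency):
    """Payload throughput from a nominal link rate and a utilization
    fraction (the rough 70% Ethernet vs. 90% FC figures above)."""
    return link_gbit * 1000 / 8 * efficiency  # Gbit/s -> MB/s

gige = effective_mb_per_s(1, 0.70)  # gigabit Ethernet at ~70%
fc1g = effective_mb_per_s(1, 0.90)  # 1 Gb Fibre Channel at ~90%
print(round(gige, 1), round(fc1g, 1))  # 87.5 112.5
```

So at equal nominal link speed the gap is tens of MB/sec, not orders of
magnitude, which is the point being made about the protocol itself.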

~F

C Kim
2005-07-13 17:01:01 UTC
Permalink
I am wondering why your vendor pitched the NetApp 940C. It has been
replaced by the more powerful 3050C, which should provide more value in
the long haul.

We are doing a similar comparison between a 3020C and an NS502. We
haven't decided yet, but the feature sets are pretty close. We are a
current NetApp shop, so NetApp has a slight advantage; however, EMC is
pitching a good product at an incredible price.

You said you were pitched the NS702G? That is the NAS gateway from EMC.
What are you using for the backend storage? Or is it actually the NS702
(NAS gateway with dual Data Movers and CX700 backend storage)? If you
are going to go SAN for the Oracle database, you need a fabric switch
(McData, Brocade, Cisco) to connect the server to the storage (unless
you are considering iSCSI?)

One thing I can say about my NetApp: it has never crashed on me. Ever.

Tell us more about your environment, your servers, etc.