New pages

  • 18:49, 15 August 2023 - PSU [406 bytes] by Digimer (Created page with "{{header}} A '''P'''ower '''S'''upply '''U'''nit, or "PSU", is the device that converts mains power into the various voltages needed inside of a computer, switch or other electronic device. In Anvil! clusters, most devices have redundant power supplies. This allows a device to be powered by two different power sources so that the loss of one power rail doesn't cause the device to shut down. {{footer}}")
  • 18:44, 15 August 2023 - BMC [298 bytes] by Digimer (Created page with "{{howto_header}} '''BMC''' is an acronym for "Baseboard Management Controller", which is the physical circuit board that provides IPMI functionality to a server. Learn more: * [http://en.wikipedia.org/wiki/Baseboard_management_controller#Baseboard_management_controller BMC] on Wikipedia. {{footer}}")
  • 18:09, 15 August 2023 - OEM [171 bytes] by Digimer (Created page with "{{header}} OEM is an acronym for '''O'''riginal '''E'''quipment '''M'''anufacturer. It generally refers to the company that built a physical device or widget. {{footer}}")
  • 18:06, 15 August 2023 - SAN [396 bytes] by Digimer (Created page with "{{header}} SAN is an acronym for '''S'''torage '''A'''rea '''N'''etwork. It differs from NAS in that it makes its disk space available to multiple servers at the block level. It generally uses many disk drives in an array using high-speed copper or fiber networking technologies and is generally fault tolerant. See: http://en.wikipedia.org/wiki/Storage_area_network<br /> {{footer}}")
  • 18:06, 15 August 2023 - Fencing [1,694 bytes] by Digimer (Created page with "{{header}} In clustering, '''fencing''' (also called 'stonith') refers to the action of removing a node from the cluster. A fence is carried out when the cluster software determines a node is faulty. Once this decision is made, the cluster software consults its configuration for information on how to carry out the fence. The fence action is in turn carried out by a software or hardware action, the details of which depend on the fence method(s) configured for the node be...") (a conceptual fencing sketch follows this list)
  • 18:05, 15 August 2023 - Quorum [1,804 bytes] by Digimer (Created page with "{{header}} In clustering terms, '''quorum''' is synonymous with "majority". All nodes and quorum disks, when used, are assigned a number of votes. The cluster is then told how many votes to expect (the sum of all nodes plus the quorum disk). When a problem occurs that causes the cluster to split into two or more partitions, each partition will add up its own votes plus those of the devices it can talk to. If the resulting count is greater than half, that partition is deter...") (a vote-counting sketch follows this list)
  • 18:04, 15 August 2023 - The 2-Node Myth [8,242 bytes] by Digimer (Created page with "{{header}} A common argument in the availability world is "You need at least 3 nodes for availability clustering". This article aims to disprove that. To understand this argument, we must first discuss two concepts in availability clustering: quorum and fencing (also called 'stonith'). = Quorum = "Quorum" is a term used to define simple majority. Nodes in a cluster have a default vote of '1'. Said mathematically, quorum is > 50%. When a cluster is quora...") (the vote-counting sketch below also covers the 2-node case)
  • 23:46, 11 August 2023 - How To [51 bytes] by Digimer (Created page with "{{howto_header}} How-to articles; * {{footer}}")
  • 21:16, 11 August 2023 - IPMI [27,359 bytes] by Digimer (Created page with "{{howto_header}} IPMI is an acronym for '''I'''ntelligent '''P'''latform '''M'''anagement '''I'''nterface. This is a technology built into many server-grade mainboards; the controller that provides it is called the '''B'''aseboard '''M'''anagement '''C'''ontroller, or '''BMC'''. IPMI, via the BMC, allows "out of band" access to a server. This means that, via an IPMI interface, a user can remotely connect to a server regardless of its power state and read its sensor data. The BMC is isolated from the hos...") (an out-of-band power query sketch follows this list)
  • 01:42, 4 August 2023 - Anvil! Networking [16,350 bytes] by Digimer (Created page with "{{howto_header}} The Anvil! Cluster implements four main network types, and there can be one or more of each type. {| class="wikitable" !style="white-space:nowrap; text-align:center;"|Network Name !style="white-space:nowrap; text-align:center;"|Prefix !style="white-space:nowrap; text-align:center;"|Subnet !style="white-space:nowrap; text-align:center;"|Used By !style="white-space:nowrap; text-align:center;"|Description |- !style="white-space:nowrap; text-align:l...")
  • 03:07, 27 July 2023 - Configuring Networking in RHEL9 [1,412 bytes] by Digimer (Created page with "{{howto_header}} This tutorial covers a few different network configurations. It should be generally useful, but it is written with the Anvil! cluster member machines in mind. Striker dashboards generally have two interfaces, configured directly. Anvil! nodes consist of two sub-nodes, acting in unison. Those sub-nodes have 8 interfaces, paired into four bonds. Two of those bonds will have IPs directly, and two will have bridge interfaces. DR hosts often match the hardwa...") (a bond-and-bridge sketch follows this list)
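
The Fencing entry above describes the flow the cluster follows when it decides a node must be removed: consult the configuration, then carry out the configured fence method(s) until one succeeds. A minimal conceptual sketch of that flow, assuming made-up method names and callables rather than the actual cluster software's API:

<syntaxhighlight lang="python">
# Conceptual sketch only: try each fence method configured for a faulty node,
# in order, until one confirms success. The method names and callables are
# illustrative assumptions, not the real fence agent interface.
def fence_node(node_name, fence_methods):
    """fence_methods is an ordered list of (name, callable) pairs,
    e.g. [("ipmi", fence_via_ipmi), ("pdu", fence_via_pdu)]."""
    for name, fence in fence_methods:
        print(f"Attempting to fence {node_name} via '{name}'...")
        if fence(node_name):  # must return True only on confirmed removal
            print(f"{node_name} successfully fenced via '{name}'.")
            return True
    # If every method fails, the cluster must block rather than risk both
    # partitions touching shared storage at the same time.
    print(f"All fence methods failed for {node_name}; blocking.")
    return False

# Example with stand-in callables: the first attempt fails, the second works.
fence_node("node2", [("ipmi", lambda n: False), ("pdu", lambda n: True)])
</syntaxhighlight>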
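The Quorum and The 2-Node Myth entries both rest on the same calculation: a partition is quorate only when the votes it can see are a strict majority (> 50%) of the expected votes. A small sketch with made-up vote counts:

<syntaxhighlight lang="python">
def is_quorate(partition_votes, expected_votes):
    """A partition has quorum only when it holds a strict majority
    (> 50%) of the total expected votes."""
    return partition_votes > expected_votes / 2

# Three 1-vote nodes plus a 1-vote quorum disk: expected_votes = 4.
# A partition holding two nodes and the quorum disk has 3 votes.
print(is_quorate(3, 4))  # True, this partition continues

# The 2-node case: expected_votes = 2, and a clean split leaves each
# partition with exactly 1 vote, which is not > 50%.
print(is_quorate(1, 2))  # False for both halves; quorum alone cannot decide
</syntaxhighlight>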
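The IPMI entry notes that the BMC can be reached over the network regardless of the host's power state. As an illustration, a short sketch that asks a BMC for the chassis power state by calling the standard ipmitool utility; the address and credentials are placeholders:

<syntaxhighlight lang="python">
# Ask a server's BMC for its power state "out of band" with ipmitool.
# The BMC address, user and password below are placeholders.
import subprocess

def bmc_power_status(bmc_host, user, password):
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc_host,
         "-U", user, "-P", password, "chassis", "power", "status"],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()  # e.g. "Chassis Power is on"

print(bmc_power_status("192.168.122.250", "admin", "secret"))
</syntaxhighlight>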
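The Configuring Networking in RHEL9 entry describes pairing interfaces into bonds, with some bonds carrying IPs directly and others carrying bridges. A sketch of one such bond-plus-bridge built with nmcli, driven from Python; the interface and connection names are illustrative assumptions, not the actual Anvil! naming:

<syntaxhighlight lang="python">
# Build one active-backup bond from two interfaces and put a bridge on top of
# it using nmcli. Interface names (ens3/ens4) and connection names are
# illustrative; adjust them to the real hardware and naming scheme.
import subprocess

commands = [
    # The bond itself, in active-backup (failover) mode.
    "nmcli connection add type bond con-name ifn1_bond1 ifname ifn1_bond1 "
    "bond.options mode=active-backup",
    # The two physical links attached to the bond.
    "nmcli connection add type ethernet con-name ifn1_link1 ifname ens3 master ifn1_bond1",
    "nmcli connection add type ethernet con-name ifn1_link2 ifname ens4 master ifn1_bond1",
    # A bridge for virtual machines, with the bond as its uplink.
    "nmcli connection add type bridge con-name ifn1_bridge1 ifname ifn1_bridge1",
    "nmcli connection modify ifn1_bond1 connection.master ifn1_bridge1 connection.slave-type bridge",
]

for cmd in commands:
    subprocess.run(cmd.split(), check=True)
</syntaxhighlight>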