Calxeda Highbank/Midway board support
=====================================

The Calxeda ECX-1000 ("Highbank") and ECX-2000 ("Midway") were ARM based
servers, providing high-density cluster systems. A single motherboard could
host between 12 and 48 nodes, each with its own quad-core ARMv7
processor, private DRAM and peripherals, connected through a high-bandwidth,
low-latency "fabric" network. Multiple motherboards could be connected
together to extend this fabric.

For the purposes of U-Boot we care only about a single node, which can be
used as a standalone system that merely uses the fabric to connect to some
Ethernet network. Each node boots on its own, either from a local hard disk
or via the network.

The earlier ECX-1000 nodes ("Highbank") contain four ARM Cortex-A9 cores,
a Cortex-M3 system controller, three 10 GBit/s MACs and five SATA
controllers. The DRAM is limited to 4 GB.

The later ECX-2000 nodes ("Midway") use four Cortex-A15 cores, alongside
two Cortex-A7 management cores, and support up to 32 GB of DRAM, while
keeping the other peripherals.

For the purposes of U-Boot those two SoCs are very similar, so we offer
one build target. The subtle differences are handled at runtime.
Calxeda as a company is long defunct, and the remaining systems are
considered legacy at this point.

Building U-Boot
---------------
There is only one defconfig to cover both systems::

    $ make highbank_defconfig
    $ make

This will create ``u-boot.bin``, which could become part of a firmware update
package, or could be chainloaded by the existing U-Boot; see below for more
details.

Boot process
------------
Upon powering up a node (which would be controlled by some BMC-style
management controller on the motherboard), the system controller ("ECME")
would start and do some system initialisation (fabric registration,
DRAM init, clock setup). It would load the device tree binary, some secure
monitor code (``a9boot``/``a15boot``) and a U-Boot binary from SPI flash
into DRAM, then power up the actual application cores (ARM Cortex-A9/A15).
These would start executing ``a9boot``/``a15boot``, which registers the PSCI
SMC handlers, then drops into U-Boot, but in non-secure state (HYP mode on
the A15s).
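
A PSCI-capable secure monitor like this is typically advertised to the
operating system through a device tree node. As a generic illustration only
(the exact binding and function IDs used on these machines are defined by the
kernel's Calxeda device trees and may differ), such a pre-PSCI-0.2 node could
look like::

    psci {
            compatible = "arm,psci";
            method = "smc";
            cpu_off = <0x84000002>;
            cpu_on = <0x84000003>;
    };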

U-Boot would act as a mere loader, trying to find some ``boot.scr`` file on
the local hard disks, or falling back to PXE boot.
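
As a sketch of what such a boot script could contain (the device numbers,
paths and file names here are made up for illustration, and would need to
match the actual installation)::

    # boot.scr source: load a kernel and DTB, then boot
    load scsi 0:1 ${kernel_addr_r} /boot/zImage
    load scsi 0:1 ${fdt_addr_r} /boot/highbank.dtb
    setenv bootargs console=ttyAMA0 root=/dev/sda2
    bootz ${kernel_addr_r} - ${fdt_addr_r}

Such a script source must be wrapped with ``mkimage -T script`` to produce
the ``boot.scr`` file that U-Boot looks for.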

Updating U-Boot
---------------
The U-Boot binary is loaded from SPI flash, which is controlled exclusively
by the ECME. The ECME can be reached via IPMI, using the LANplus transport
protocol. Updating the SPI flash content requires vendor-specific additions
to the IPMI protocol, support for which was never upstreamed to ipmitool or
FreeIPMI. Some older repositories for `ipmitool`_, the `pyipmi`_ library and
a Python `management script`_ to update the SPI flash can be found on GitHub.
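
For reference, reaching the ECME with a stock ipmitool over LANplus would
look like the following (the host address and credentials are placeholders;
the SPI flash update subcommands themselves are only available in the
vendor-patched tools mentioned above)::

    $ ipmitool -I lanplus -H <ecme-ip> -U admin -P admin chassis power status
    $ ipmitool -I lanplus -H <ecme-ip> -U admin -P admin chassis power cycle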
| 63 | |
| 64 | A simpler and safer way to get an up-to-date U-Boot running, is chainloading |
| 65 | it via the legacy U-Boot:: |
| 66 | |
| 67 | $ mkimage -A arm -O u-boot -T standalone -C none -a 0x8000 -e 0x8000 \ |
| 68 | -n U-Boot -d u-boot.bin u-boot-highbank.img |
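
The ``mkimage`` call above wraps ``u-boot.bin`` in U-Boot's 64-byte legacy
image header, which records the load and entry addresses alongside CRC32
checksums. As a rough sketch of that header (field layout as in U-Boot's
``include/image.h``; the timestamp is left at zero here, unlike real
``mkimage`` output), it can be reproduced in Python:

```python
import struct
import zlib

IH_MAGIC = 0x27051956          # legacy uImage magic number
IH_OS_U_BOOT = 17              # -O u-boot
IH_ARCH_ARM = 2                # -A arm
IH_TYPE_STANDALONE = 1         # -T standalone
IH_COMP_NONE = 0               # -C none

def make_uimage(data, load=0x8000, entry=0x8000, name=b"U-Boot"):
    """Wrap payload bytes in a 64-byte big-endian legacy image header."""
    dcrc = zlib.crc32(data) & 0xFFFFFFFF
    # Build the header with the header-CRC field zeroed first ...
    hdr = struct.pack(">7I4B32s",
                      IH_MAGIC, 0, 0, len(data), load, entry, dcrc,
                      IH_OS_U_BOOT, IH_ARCH_ARM,
                      IH_TYPE_STANDALONE, IH_COMP_NONE, name)
    # ... then patch in the CRC computed over that zeroed header.
    hcrc = zlib.crc32(hdr) & 0xFFFFFFFF
    return hdr[:4] + struct.pack(">I", hcrc) + hdr[8:] + data
```

Running ``mkimage -l`` on a generated image prints these fields back, which
is a convenient way to cross-check the addresses before chainloading.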

Then load this image file, either from hard disk or via TFTP, from the
existing U-Boot, and execute it with ``bootm``::

    => tftpboot 0x8000 u-boot-highbank.img
    => bootm

.. _`ipmitool`: https://github.com/Cynerva/ipmitool
.. _`pyipmi`: https://pypi.org/project/pyipmi/
.. _`management script`: https://github.com/Cynerva/cxmanage