It looked so simple. Hundreds of articles out there on t'internet, why was it proving so hard to get KVM working with a bridged virtual network to my physical network? I went through some pain, so wrote down some notes...
If you just want your virtual machine to be isolated from all other networks, or you're happy using macvtap to share it out to the network, then you don't need to worry about network bridges and you won't be interested in this article. If your virtual machine only ever needs to connect out, you can just use NAT mode. Likewise, macvtap can work really well to expose the VM on the physical network and share the physical network card in the host, but there is one key limitation with macvtap: it doesn't allow communication between the host and the guest. Depending on your use case, this might be a bit of a showstopper.
If you want your VM to appear on the physical network with an IP address just like any other routed physical device, and therefore also be accessible by the host computer, then you need to create a network bridge and bind your VM's virtual network device to that bridge...
Fabian Lee has it right
Firstly, let me just say, this guy has it right in an article he wrote, but there are a few precursor things I found along the way which may help you if you're stumbling when following his guide...
Netplan or ifup?
From 18.04 onwards, Ubuntu made a move away from the old way of configuring networks (/etc/network/interfaces) to using netplan. Either is fine but I kept with netplan as it does seem way more powerful than the older ifup-based configuration, so that's how things are documented below. I had an odd mix where both packages were on my box originally (possibly because, in a previous life, it was a 16.04 LTS server), so I removed all trace of the ifupdown package (sudo apt-get remove ifupdown).
networkd or NetworkManager?
Sack NetworkManager off right now. This was a large cause of my ballache because, while my box is predominantly a server, I do actually run gnome and a desktop on it for certain activities... and again, it had got a bit of a mix on it with both networkd and NetworkManager daemons installed (but the older, more server-focused networkd daemon was masked, i.e. effectively disabled).
Netplan supports the concept of 'renderers' which can create config for either of these two network manager daemons, so I disabled all the NetworkManager services, enabled/unmasked networkd and set about configuring netplan with that in mind. Just pay attention to the 'renderer' directive in your netplan YAML config file; make sure you set it to networkd as per Fabian's example.
Disable/stop all systemd services related to NetworkManager (including NetworkManager-wait-online) as these don't support bridges on automatic start-up. On top of that, NetworkManager tries to be helpful when networks go down or lose connection. It's really aimed at desktop users with wifi and so forth; if you're running a fairly static server like me then it really doesn't bring anything to the party and, when it tries to do its automatic helping business, it just ends up making things worse.
Enable/start systemd-networkd.service and networkd-dispatcher.service
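The two steps above boil down to a handful of systemctl commands. This is a sketch; the exact set of NetworkManager units present will vary by Ubuntu release, so check with `systemctl list-unit-files | grep -i network` first:

```shell
# Stop and disable the NetworkManager services (names as found on Ubuntu 18.04+;
# your box may not have all of them installed)
sudo systemctl stop NetworkManager NetworkManager-wait-online NetworkManager-dispatcher
sudo systemctl disable NetworkManager NetworkManager-wait-online NetworkManager-dispatcher

# Unmask (if needed), enable and start networkd plus its dispatcher
sudo systemctl unmask systemd-networkd
sudo systemctl enable systemd-networkd networkd-dispatcher
sudo systemctl start systemd-networkd networkd-dispatcher
```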
Understanding the bridge/NIC relationship
As Fabian's article sort of points out, in order to create a bridged network in KVM you need to create a bridge in your netplan config file, tied to the physical network card interface. Once that's done, forget the NIC interface. It's still listed in ifconfig/etc but it's not what you use from this point forward. Your host machine will set all its network config against the bridge, not the physical NIC. Likewise, KVM will bind to the bridge as well - if you try to bind anything to the physical NIC you'll get warnings/errors equating to "in use".
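As a sketch, the netplan config looks something like the below. The NIC name (enp3s0), the IP addresses and the file name are all examples here; substitute your own. Note the bridge carries the host's address, not the NIC:

```yaml
# /etc/netplan/01-netcfg.yaml (file name and addresses are examples)
network:
  version: 2
  renderer: networkd      # the important bit - use networkd, not NetworkManager
  ethernets:
    enp3s0:
      dhcp4: false        # the physical NIC itself gets no address
  bridges:
    br0:
      interfaces: [enp3s0]
      addresses: [192.168.1.10/24]   # the host's IP now lives on the bridge
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
      parameters:
        stp: false
        forward-delay: 0
      dhcp4: false
```

Then `sudo netplan apply` (or `sudo netplan try` if you want a safety net) and you should see br0 holding the host's IP in `ip addr`.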
Forget the KVM UI for virtual networks
No matter how much I tried, I could not create a virtual network in the KVM UI that would let me choose to bind to the br0 bridge device I'd created in the previous steps. When it came to the box where you choose what device to bind it to, it would never list the br0 device (maybe because it's not physical). The minute I took the command-line approach documented in Fabian's article, it worked first time. So, for the network bridge part at least, configure it using an XML file and the virsh net-* commands.
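In the spirit of Fabian's command-line approach, the whole thing is a small XML file fed to virsh. A sketch, assuming the bridge is called br0 and you name the virtual network host-bridge to match the later steps:

```shell
# Define a libvirt network that simply points at the existing br0 bridge
cat > host-bridge.xml <<'EOF'
<network>
  <name>host-bridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
EOF

virsh net-define host-bridge.xml     # register the network with libvirt
virsh net-start host-bridge          # bring it up now
virsh net-autostart host-bridge     # ...and on every boot
virsh net-list --all                 # host-bridge should show as active
```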
Once that's done, you can add a virtual network interface to your VM. Choose "Virtual Network 'host-bridge' : Bridge network" (or whatever you called your virtual network, of course) and set the device model to 'virtio'.
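If you prefer to check (or make) that change outside the UI, the equivalent snippet in the VM's domain XML (via `virsh edit <your-vm-name>`) looks like this, assuming the virtual network is named host-bridge as above:

```xml
<interface type="network">
  <source network="host-bridge"/>
  <model type="virtio"/>
</interface>
```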
Once up and running, you can assign an IP (assuming you don't use DHCP on your network) and your VM will now be connected onto the physical LAN via the bridge device you set up. Your host machine can see it and so can other machines on your LAN. Your VM is as reachable as you could ever want it!
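If the guest is also an Ubuntu box running netplan, assigning that static IP inside the guest looks much like the host config, minus the bridge. Interface name and addresses are examples again:

```yaml
# Guest's /etc/netplan/01-netcfg.yaml (names and addresses are examples)
network:
  version: 2
  renderer: networkd
  ethernets:
    enp1s0:             # the virtio NIC as seen inside the guest; yours may differ
      dhcp4: false
      addresses: [192.168.1.20/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
```

After a `sudo netplan apply` in the guest, a ping from the host (`ping 192.168.1.20`) is a quick way to prove the host-to-guest path that macvtap couldn't give you.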