REVIEW: Frame.IO's Camera to Cloud & the Atomos Shogun CONNECT by Kenny McMillan



Apr 25, 2023


In which I get to test the new monitor/recorder from Atomos that enables Frame.IO's Camera to Cloud functionality.

Over the past year I’ve been testing out Frame.IO's Camera to Cloud system in various situations, primarily using it in service of our live event coverage for ProVideo, namely Cinegear, NAB, and Adobe MAX. During those events we’d been using the Teradek Cube 655 to transmit footage to our editor as it was shot, which worked, but the Teradek was kind of finicky every time I tried to set one up anew, since the Cube wasn't originally built for C2C. That was a sticking point for me, but the avalanche of Pros outweighed the minor inconvenience of a Con. Luckily, purpose-built hardware is now on the market and I got to test out one solution, the Atomos Shogun Connect.

If you’re unfamiliar, Camera to Cloud is a system in which you can film something, and a standard but specific piece of hardware you’ve likely already got on set (a Teradek, a monitor/recorder, etc.) will send proxies of that footage, near-instantly, to your editor, who's got the Frame.IO C2C panel open in their NLE of choice (or the Frame.IO web app alongside it). Every time you stop recording, about that length of time later your editor gets a notification that there's new footage downloading, and it's available right there for them to start a cut. Once they get the original footage from set, they can simply relink the clips as they normally would. Frame.IO has a technical guide if you’d like to check that out.

This is bonkers for a couple of reasons, but I always like quoting Michael Cioni, who said "it makes the linear process of shooting and then editing parallel". Time is money in this industry, right? Getting started on the rough cut while it's being shot shaves a lot off the overall timeline without adding undue stress to the editor. I also love the idea of having another set of eyes, completely detached from the hectic nature of a set, watching each clip as it comes in, analyzing it, and letting the production know in real time that something's wrong: missed focus, a bad clip for VFX, whatever the case may be. This also means other non-production-essential members of the team can remain remote and offer notes without having to be in Video Village or the DIT tent or what have you.

Now for our event coverage we wanted to be as close to "first to air" as possible, so we actually didn't even use the original camera negative; we went straight to the web with the proxies and guess what: it was fine. Since we were using my C500, all I had to do was set the camera to Rec709 (I actually sent a nice custom viewing LUT that I had made down the HDMI cable, to "pre-grade" the footage) and make sure the microphones were on channels 1 & 2 and monitoring correctly, so the editor had both my and the interviewee's audio on separate channels for processing purposes. As we were shooting for web and not even using my internal recording, the full-fat footage saved to the CFexpress cards and the internal 2K proxies on the SD card were nice redundant backups. If I wanted to get really crazy I could have had each clip save to both CFexpress cards, bringing the in-camera total of redundant clips up to three on top of the version in the cloud. That's job security.

The first time we did this we had the proxy encoder on the Teradek set to 15Mbps, and we found that to be too big to transfer at a reasonable speed in a spotty connection area like the inside of a convention center with a bunch of WiFi signals everywhere. We found that 10Mbps was a perfect balance of quality and speed, and later found that 8 or even 5Mbps was perfectly acceptable. Remember, these are simple interviews or to-camera pieces going straight to YouTube, so we don't need 4K clarity or anything. That being said, I did have a Pearlescent filter on the lens one time and the bloom looked kind of bad in that environment and didn't hold up amazingly once proxy-fied, so maybe don't do that. In any case, if you go watch the videos yourself I bet you wouldn't notice any strange blocking or anything even remotely giving away that we were using such a low-bitrate proxy. In some cases out-of-focus things look really very out of focus, but as long as that's the background and not someone's face it's fine. For this off-label use of the system it's a perfectly acceptable trade-off… but what if you wanted those original camera negatives instead?

Lucky you: the RED V-Raptor (and the Fujifilm X-H2 with a grip) can natively send .R3D files at up to 8K resolution, as well as ProRes proxies. That's right, if you really wanted to (and without any additional hardware beyond the camera itself) you could simply send those .R3Ds straight from the camera, wired or wirelessly, into your editor's hands, and now you’ve got an additional full-size backup to your cards and drives on set on top of everything else. DPs have never slept so soundly. Look forward to us testing that this year at NAB!

For now, though, I have a simple Canon C500 Mark II and I got to test the new Atomos Shogun Connect on it.
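If you're weighing bitrates for your own connection, the math is simple: megabits per second times seconds, divided by eight, gives megabytes. Here's a quick sketch of that arithmetic (a hypothetical helper, not part of any Frame.IO or Atomos tooling) for the bitrates discussed above:

```python
# Rough proxy file-size estimate (video only; ignores audio and
# container overhead). Hypothetical helper for illustration.
def proxy_size_mb(duration_s: float, bitrate_mbps: float) -> float:
    """Approximate proxy size in MB for a clip of `duration_s`
    seconds encoded at `bitrate_mbps` megabits per second."""
    return duration_s * bitrate_mbps / 8  # Mbps -> MB/s

# A 10-minute interview at the bitrates we tried:
for rate in (15, 10, 8, 5):
    print(f"{rate} Mbps -> {proxy_size_mb(600, rate):.0f} MB")
```

At 10 minutes, dropping from 15Mbps to 10Mbps saves you hundreds of megabytes per clip, which is exactly why the lower settings moved so much faster on convention-center WiFi.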
Took us a while, but we’re here now.

The Shogun Connect is similar to the Shogun 7 but has a new, 0.2"-smaller 1920×1200, 325 PPI "SuperAtom" IPS touchscreen (similar pixel density to an iPhone 11) that comes in at 2000 nits, about 500 nits brighter than the original (or they’re the same; their website gives conflicting info), and it obviously has network capabilities. It would also appear the Shogun 7 is P3 while the Connect is Rec709, so there is that. Both monitors can be calibrated using the regular hardware (in my case the i1Display Pro) in conjunction with Atomos Calibrator.

The device itself looks premium and started up fairly quickly, in maybe a couple of seconds. The touchscreen is responsive, nice and bright, the menu layout is intuitive and easy to use, and it starts up basically ready to rock. All you have to do is make sure your input settings are correct (camera type, audio monitoring, timecode, etc.) and that it's saving to the correct file type for your use case (which doesn't affect Camera to Cloud, so anything works). Your options there are ProRes, ProRes RAW, DNxHR, DNxHD, and H.265, which should cover most of the bases for professional work.

As far as physical inputs go, you’ve got one HDMI and one 12G SDI port capable of up to 4:2:2 10-bit; for outputs there's the same, and each can convert to the other, so the device is kind of a cross-converter as well. The SDI port will record metadata for the file name from Arri and RED cameras, and both it and the HDMI port will record white balance, exposure index, shutter speed/angle, F-stop, ISO, gamma, and gamut when in RAW. Somewhat related, I also found that you can tag clips in the playback menu: in/out, good/bad, angle number, person/landscape, B-roll, all kinds of stuff. I thought that was cool.

I’ll let you read up on the rest of the monitor's specific features at the end of the article, but safe to say it's a fantastic monitor/recorder in its own right, with all the assistance features (False Color, Zebras, Waveform, Vectorscope, etc.) one might need. Let's talk about Camera to Cloud.

Setting up C2C was dead simple: I just had to connect to a network (I opted for a hardwired connection) and take a picture of a QR code to connect the device to my Frame.IO account via Atomos Cloud. It only took me three minutes and "just worked", which was great to experience after having to fuss with the Teradek. From there I just picked my project ("C2CREVIEW") and started running some clips.

To test rough transfer times, I ran a short clip (19 seconds) and a longer clip (3m36s) and found that it took a little longer than the clip's runtime to transfer. In other words, the 19s clip queued in my Frame.IO account 11s after I hit "stop" on the camera, and was watchable a total of 28s after. The 3:36 clip took 3 minutes to queue and was watchable 44 seconds later. We found these times to be relatively similar during our longer (10-20 minute) clips taken at the aforementioned conferences we covered, and we find that to be perfectly reasonable, although your connection speed will obviously affect things. My home network has a 20Mbps max upload speed, and the wireless connection we were using in the field was actually much faster, if I remember correctly. Spectrum Internet strikes me down once again.

My download speed is actually 300Mbps on a good day, so that likely helps me get the footage off the cloud quickly in any case. The clips don't download to your PC until you tell them to, so all the previews you watch are in the cloud; they can be viewed as quickly as you like and don't start filling up drives suddenly without your knowledge or permission. As my download speed is excellent, telling Frame to download the files (in either Original, Proxy Highest, or Proxy Lowest flavors) took around a second, and they downloaded in parallel, as opposed to linearly, into the folder I selected and were automatically added to my Project.
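Those numbers line up with a simple rule of thumb: upload time is roughly clip runtime multiplied by the ratio of proxy bitrate to upload bandwidth, plus encoding and queueing overhead (which is why the measured times above run a bit longer). A back-of-the-envelope sketch, assuming the link sustains its full rated upload speed:

```python
# Back-of-the-envelope upload-time estimate. Real uploads add
# encoding/queueing overhead and rarely hit the full rated speed.
def upload_time_s(duration_s: float, proxy_mbps: float,
                  upload_mbps: float) -> float:
    """Seconds to push a proxy of `duration_s` seconds, encoded at
    `proxy_mbps`, over an uplink rated at `upload_mbps`."""
    return duration_s * proxy_mbps / upload_mbps

# The 3m36s clip as a 10 Mbps proxy over my 20 Mbps home uplink:
print(f"{upload_time_s(216, 10, 20):.0f} s minimum")
```

When the proxy bitrate matches your upload speed, transfer time approaches the clip's runtime one-to-one, which matches the "an hour of footage takes about an hour" experience.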

From there you just edit as usual, with clips in the Frame.IO panel being commented and drawn on as you’d expect, if you like. I got all four channels of audio from the C500 included in the proxy files, but if you’re running separate audio you can either hop to the monitor via the 3.5mm jack, go into the camera itself if you can figure that out, or simply use a Sound Devices 888 or Scorpio (with everything properly timecoded), and those audio files can also be transmitted to the C2C project, already synced up with the video. Incredible.

It's not just Sound Devices recorders, or even just the Shogun or Teradek, that work with Camera to Cloud; there's a ton of hardware and software you can utilize, even down to the phone app Filmic Pro. You can add CDLs from Livegrade, dailies from ShotPut Pro and Silverstack, camera reports with Zoelog; hell, you can even integrate Hedge into your workflow, setting the Frame.IO project as a destination folder. It would appear more integrations are being added with some regularity, so that's great to see. Soon enough I’ll be able to take one of these systems out on a narrative or commercial shoot, but for now this more academic review will have to suffice. I will say, the aforementioned "off-label" use of event coverage, treating the proxies as the "original" footage, is an amazing tool that I can see plenty of people using, including certain news-gathering teams. The nice thing about these devices is that the footage is still being recorded to the monitor's drive (or the SD card, in the 655's case), so even if you lose connection the hardware will just wait for the connection to resume before continuing the upload. Also, a representative at Frame.IO told me there are actually nine backups made in the cloud before you get your clip, ensuring that your footage is safe.
As it stands, I edit the vast majority of the content I shoot (I run a small production company, so that keeps costs down and income up, ya know), but even so, something like Camera to Cloud would mean that I’ve got those files backed up at home before I get there, already organized. That isn't nothing! I was recently in a situation where both of my CFexpress cards corrupted (long story; they weren't from a known brand, don't worry) and having that footage at home would have at least saved me some of the panic. That project was a web comedy shooting 1080p anyway, so the proxies were at least somewhat close to what we would have used. Pro tip: hook your card reader to a Chromebook or Linux device; somehow they can just read bad cards. Got my footage off them that way.

I feel like I’m too excited about this service to be objective, because I can't find anything I dislike about it. Connections are easily made, the hardware coming out is lovely, advancements and integrations are being made all the time… I mean, connection speeds certainly are a factor, but that's essentially between you and your service provider. Obviously sending 8K R3D files over a cellphone hotspot would take a while, but at least internet speed is something that you can plan for or accommodate. As far as I’m concerned this technology will, and should, end up on every set with the ability to run it. Even if you don't have your editor right there salivating for the footage to pop into their bin, this will be huge for remote work, as the editor doesn't have to be within a PA's gas tank of set to get going, and all the drives can be consolidated and sent over as one package when the time comes. I mean heck, even if you don't have a live connection on set at all, once the shoot day's done you can just go back to your room, plug the Shogun into an ethernet cable (probably wall power too, to be safe) and it’ll get going with no input needed from you. If you’ve got an hour of footage on there, like I said, it’d probably take about an hour to transfer. That's not too bad. I’ve seen CF cards take longer on a solid USB connection.

If you have the main Adobe Creative Cloud "All Apps" subscription, or Premiere Pro/After Effects individually, you already have a Frame.IO account that includes two users, five projects, and 100GB of active storage on top of Camera to Cloud. Students and teachers on an individual All Apps plan will also have access to Frame.IO for Creative Cloud. Everyone else will have to sign up for a Frame.IO account, and to get C2C you’ll need to at least go with the "Pro" tier. The Free tier includes the standard review tools on up to two projects, but you’ll need Pro, Team, or Enterprise to get C2C functionality.

Full Tech Specs

Physical Specifications
Dimensions (W x H x D): 198mm / 7.8″ x 133mm / 5.2″ x 40mm / 1.57″
Weight: 730g / 1.6 lbs

Mounting Points
Top: 1 x 1/4″-20 or 3/8″ with anti-rotational points
Bottom: 1 x 1/4″-20 or 3/8″ with anti-rotational points
Right: 1 x 1/4″-20 or 3/8″ with anti-rotational points

Power
Input voltage: 6.2 – 16.8V
1 x NP-F / L-Series slot
Locking DC jack (5.5mm OD & 2.1mm ID)
D-Tap via optional D-Tap to DC cable

Display
7″ Full HD 10-bit (8+2 FRC) SuperAtom IPS panel (capacitive touch)
325 PPI
Rec709, 2000nit +/- 10%, edge-lit
Calibratable with ColorChecker Display Pro or Plus

Color Features
Nearly every standard Log format and gamut
LUT support (33pt CUBE)
Native / 709 / HLG / PQ / 3D LUT monitoring modes

Video Input
1 x HDMI 2.0
1 x 12G SDI
8/10-bit depth, 4:2:0 and 4:2:2

Video Output
1 x HDMI 2.0
1 x 12G SDI
HDMI > SDI conversion
SDI > HDMI conversion

Metadata
HDMI: RAW – white balance, exposure index, shutter speed/angle, iris F-stop, ISO, gamma, gamut

SDI: file name from Arri, RED; RAW – white balance, exposure index, shutter speed/angle, iris F-stop, ISO, gamma, gamut

Network
1GbE RJ45 Ethernet
Wi-Fi 6
Wireless RF (AirGlu™ SYNC timecode network)
Atomos Cloud (Capture to Cloud, Live Stream)
Connect to hotspot via Wi-Fi

Audio In/Out
24-bit / 48kHz PCM
HDMI: 8ch 24-bit, input dependent
SDI: 12ch 24-bit, input dependent
3.5mm jack in – 5V power
3.5mm jack out + headphone

Resolution and Frame Rates (Record, Monitor & Playback)
4K DCI up to 4Kp60
4K UHD up to 4Kp60
2K DCI up to 2Kp60
FHD up to 1080p120
8K RAW up to 8Kp30 via HDMI
6K RAW up to 6Kp60 via HDMI & 6Kp30 via SDI
4K RAW up to 4Kp120 via HDMI / SDI
3.5K Anamorphic RAW up to 3.5Kp60 via HDMI
2K RAW up to 2Kp240 via SDI

Recording Codec
Apple ProRes RAW up to 12-bit 8Kp30 / 4Kp120
Apple ProRes HQ / 422 / LT up to 4Kp60
Avid DNxHD up to 1080p60
Avid DNxHR up to 4Kp60
H.265 C2C proxy recording included – optional activation for up to 4Kp60 (MOV container)
Camera to Cloud support from Adobe
[Simultaneous proxy record ProRes RAW* / ProRes / DNx with H.265 proxy available in a future firmware release]

Supported Media
SSDmini, CFast II via AtomX CFast Adapter, Master Caddy III

On-Screen Tools
Waveform, RGB parade, vectorscope
1x / 2x zoom
Focus peaking: colour and slider for threshold
False colour with IRE scale
Zebra
Isolate color channel (blue only)
Safe areas, cine & social frame guides (preset or custom)
Anamorphic desqueeze (1.25x, 1.33x, 1.5x, 1.8x and 2x)
Frame grab and onion-skin overlay saving lossless Full HD PNG

