October 26, 2011

When Geomagic meets Rapidform (part 1)

I was quite surprised to find out that Rapidform apparently considers Alias one of its main competitors (by the way, I’m not asserting anything, I’m just repeating gossip, which is a wrong thing to do – information should be checked before forming opinions on it – but I can’t help myself). It is hard for me to judge why, since I haven’t had the time to try out Alias for reverse engineering applications. Alias is an Autodesk product, a highly sophisticated surfacing package capable of processing point clouds and meshes and using them to modify an existing design. Beyond that I know nothing of it.

So far I’ve been concerned with comparing Rapidform only with Geomagic, for the simple reason that these two packages are available to me in their full glory. Autodesk Alias is available as a free trial download, but whether the trial has its full promised functionality I unfortunately have no clue as of yet.

To be honest, with Rapidform and Geomagic doing what they are made for, I see no point in involving a surfacing package (which is not a properly dedicated point-processing, and thus reverse engineering, package) in the process. If Alias is already used extensively within the organisation, it makes sense not to invest in a different package. But when you have a free choice between Alias, Geomagic and Rapidform, and you don’t require Class-A surfaces, which one do you pick?

I’ll drop Alias out of the conversation for now and focus on the Geomagic vs. Rapidform battle. Strictly speaking, the two packages are not really the same thing and probably aren’t true competitors, but the task I need to handle can be tackled in either of them (with some obvious and not so obvious disadvantages).

I was given air inlet scans (not of very good quality), taken with a handheld laser scanner, to reverse engineer and preferably turn into an editable CAD model. When you have both Geomagic and Rapidform at your disposal there are several ways to do that. I don’t have all the time in the world, so I didn’t try them all; besides, some of them seemed like not a very good idea. To explain simply: the air inlet consisted of a rectangular base with mounting holes and a hollow section that is rectangular at the bottom and transforms into something like an oval at the top. This might sound very confusing; the picture below is the closest thing to it I could find, as I’m not allowed to show the actual part. Our part had a bit more regularity to its hollow section and, to be honest, seemed like a very simple thing to remake in Rapidform XOR. The part was given back to the customer before I started work on the scans, so all I really had to work from was a stack of points that were obviously a bit faulty. I wasn’t too upset about that because, as I mentioned, it seemed pretty easy.

Air intake on the front of a motorcycle

(This picture gives a pretty good idea of what an air intake is, though our part had different geometry. The picture comes from http://www.600rr.net/vb/showthread.php?p=1370856)

So the first method is to do all of it from start to finish in Rapidform. Obviously, I didn’t, because I have had no luck creating a mesh from scan data in Rapidform. I still haven’t figured out why, and I’m still putting it down to lack of experience. Besides, I already mentioned that I prefer Geomagic’s mesh editing tools to Rapidform’s. Despite Rapidform positioning itself as more user friendly and familiar to CAD users, I’m very surprised how “unfriendly” its mesh editing tools are. For example, mesh borders are highlighted in Geomagic (a blue mesh has green highlighted borders which turn red when the user selects them) but not in Rapidform, which makes it very hard to find all the holes that need filling in a massive single-colour mesh where the “Fill all” option is not applicable. If it all comes down to some silly display setting, I’m sorry to be so picky about little things, though I do think mesh borders should be highlighted by default. Thus, all methods where mesh manipulation would have to be done in Rapidform were avoided. With my faulty scans I went straight into Geomagic Studio and tried to save what could be saved. When I call my scans faulty I mean they are not very accurate, and even the most awesome mesh software in the whole universe won’t fix that. What would fix it is introducing the actual part to my callipers (I’m very proud of my callipers, a great present from ex-colleagues), but the part went back to the customer. A lot of guessing got involved, which made me really unhappy with the resulting part, but more on the end model later.

Combining scans, cleaning them, meshing and editing are not worth discussing any more – with Geomagic it’s second nature: click the icons without using much brain power. The route is always the same – first the multiple scans are aligned using Local Alignment, then Global Registration. From here there are two ways: either merge the scans and produce a mesh in one step, or combine the scans without meshing. I prefer the second way because it gives me the chance to do some cleaning on the combined scan (like deleting outliers, cleaning noise and, most importantly, fixing normals). I’m very pro-normal-fixing, because without it the resulting mesh can have some serious defects which can only be fixed cleanly by deleting parts of the mesh and filling them like a hole (during which, obviously, original information is lost). What can happen is that layers of mesh form on top of each other, so the mesh model has a sort of “flakes” over the surface. They can be eliminated by filling holes or rewrapping, but often that results in the inside surface flipping to the outside, which is very hard to get rid of. Therefore, my advice – fix the normals.

Mesh Layers

(A layer of stray mesh can be seen inside the part. This defect resulted from a scan error, not from a normals problem, but similar smaller layers appear if I don't do anything about the normals.)
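To make the alignment route above a bit more concrete for anyone without Geomagic at hand, here is a rough sketch of what the fine-alignment step amounts to, using the open-source Open3D library as a stand-in (this is not what we use at work; the file names and tolerances are made-up assumptions):

```python
# Sketch: pairwise scan alignment refined with point-to-plane ICP, roughly the
# job Local Alignment does after a coarse manual alignment. File names and
# tolerances are illustrative only.
import open3d as o3d

source = o3d.io.read_point_cloud("scan_left.ply")
target = o3d.io.read_point_cloud("scan_right.ply")

# Point-to-plane ICP needs normals on the clouds
for pcd in (source, target):
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30))

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=1.0,   # mm, depends on scan noise
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())

source.transform(result.transformation)   # bring source into the target's frame
print("fitness:", result.fitness, "RMSE:", result.inlier_rmse)
```

The normal fixing I keep going on about has a rough analogue here too: Open3D’s orient_normals_consistent_tangent_plane() re-orients the point normals consistently before meshing, which is more or less the job I do by hand in Geomagic.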

I made a number of meshes. The first was a reference mesh with all the features intact but smoothed, with clean edges and hole-free (apart from the bottom and top openings, of course). The second mesh contained only the free-form-ish part of the component, with the bottom base and all the mounting holes deleted. The third and fourth meshes were the same as the first and second but thickened – the original part is a hollow carbon fibre air intake which was scanned only from the outside. It looked to me as if the thickness was pretty much constant through the whole thing, but the part was taken away before I could actually measure it (which is only my own fault). I didn’t keep the carbon fibre texture as it wasn’t necessary.
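Since I couldn’t measure the wall, the thickened meshes were really just the outer skin offset inwards by a guessed constant. A crude way to do the same thing outside Geomagic is to push every vertex along its normal, shown below with the trimesh library (the file name and the 1.5 mm wall are assumptions, and a proper offset tool would also stitch the open boundary and check for self-intersections):

```python
# Sketch: crude constant-thickness shell by offsetting an open surface mesh
# along its vertex normals. Dedicated RE packages do a proper, self-
# intersection-aware offset; this is only the idea in miniature.
import trimesh

outer = trimesh.load("air_intake_outer.stl")   # hypothetical file name
thickness = 1.5                                # mm, guessed wall thickness

inner_vertices = outer.vertices - outer.vertex_normals * thickness
# Reverse the face winding so the inner shell's normals point the other way
inner = trimesh.Trimesh(vertices=inner_vertices,
                        faces=outer.faces[:, ::-1],
                        process=False)

shell = trimesh.util.concatenate([outer, inner])
shell.export("air_intake_shelled.stl")
```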

While I was still in Geomagic I decided to make a surface model of the air intake in case all my efforts in Rapidform failed. I used the thickened mesh with all features intact and the “Exact surfacing” option. Honestly, I really think I should get a grip on “Parametric surfacing”, but what looks so easy in tutorials, with parts specifically selected to demonstrate the benefits of the option, isn’t as easily done on randomly shaped parts with my little training. Anyway, surfacing gave the result I expected – the regular features got robbed of their regularity and would be no good.

Air intake surfaced in Geomagic

(Can you see the mounting holes on the bottom of the base? They got seriously distorted during surfacing. Tinkering with the mesh might get a better result, but it adds to processing time.)

What used to be done is this: process the captured data in Geomagic Studio and send the surfaced components as IGES or STEP to customers, who (if they needed a CAD model) would then use them as a template to rebuild the component in whatever CAD package they use. Sending .stl is not always an option, as not all CAD packages can open them, and even the ones that can aren’t capable of performing any actions on them apart from showing a point cloud in random space; there are add-ons that can help, but they have very limited capability.

I also surfaced the fourth mesh – thickened but without features. I had a hunch that the freeform part of the component might not be so easy to rebuild in Rapidform manually and would need autosurfacing, and so far Geomagic had proved to be better at that. The idea was to import the surfaced featureless body into Rapidform, align it with the reference mesh and rebuild the rest of the features around it.

After covering all the possibilities in Geomagic Studio, I finally abandoned it with a pack of meshes and a couple of surface models and went on to explore what Rapidform is capable of.


September 20, 2011

First try with Geomagic Studio

Another piece of reverse engineering software I’ve had the chance to try out is Geomagic Studio. If Rapidform attempts to resemble a CAD package, Geomagic Studio is more of a standalone reverse engineering tool. Yet learning to use it is not that hard. These days software interfaces are created for idiots – as long as you’re familiar with a handful of terms you’ll be able to operate any specific software. The switch from Rapidform to Geomagic happened in minutes, without me bothering to go through tutorials.

I was given a fresh mesh. Most often a component scan will be done in several parts. These parts are then imported into Geomagic Studio, aligned and merged together. The merged scan is then wrapped into a polygon mesh. All of it is done in a few clicks of the mouse, with a bit of common sense added to the whole process.
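The wrapping itself is a black box in Geomagic, but a comparable open-source step is Poisson surface reconstruction. A minimal sketch with Open3D, purely to show the idea (file names and the octree depth are assumptions, and the result won’t match Geomagic’s Wrap exactly):

```python
# Sketch: turning a merged point cloud into a polygon mesh with Poisson
# reconstruction, a rough analogue of Geomagic's Wrap command.
import open3d as o3d

pcd = o3d.io.read_point_cloud("merged_scan.ply")     # illustrative file name
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(50)      # consistent normal directions

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)                                    # depth trades detail vs. noise
mesh.compute_triangle_normals()
o3d.io.write_triangle_mesh("wrapped.stl", mesh)
```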

That’s how a “fresh” mesh is created. My first component (and maybe not the best one for a novice to start with) was a motorcycle bodywork fairing. The fresh mesh looked like a very unsuccessful potato crisp – a smushed freeform pancake with very ragged edges, holes all over, and a rough, defective surface. The original part was a smooth freeform with rounded corners. Such a difference arises from the scanning itself, and honestly there’s very little that can be done about it. Acquiring better scanning equipment is the obvious first solution, but that investment (which wouldn’t be a minor one) can be avoided by having a decent processing tool and an experienced software user.

I had at least one of those things – a decent processing tool, Geomagic Studio. The first command thrown at the fresh mesh was Mesh Doctor, which gets rid of crossing polyvertices, dangling triangles and other things that mean very little to me. After that the hole filling begins, which can be a long, long, long process. Studio offers a “Fill all” command which will automatically detect every hole and fill them all with one click of the mouse. However, it will almost always go wrong here, as it detects the edge of the potato crisp as one big hole which it then tries to fill, and if it doesn’t crash in the process the result is no longer a bodywork panel but a squashed muffin. There are other ways to get rid of smallish holes, like the “Rewrap” command, and maybe others I haven’t discovered yet.
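To give an idea of what this kind of clean-up looks like outside Geomagic, here is a minimal sketch with the trimesh library (an illustrative stand-in on my part – it only handles simple cases, and the ragged crisp edge would still need manual work):

```python
# Sketch: basic automated mesh clean-up, loosely analogous to running
# Mesh Doctor and then filling the small, simple holes.
import trimesh

mesh = trimesh.load("fairing_fresh.stl")      # illustrative file name

mesh.process(validate=True)                   # merge vertices, drop bad faces
trimesh.repair.fix_normals(mesh)              # make face winding consistent
trimesh.repair.fill_holes(mesh)               # fills small, simple holes only

print("watertight after repair:", mesh.is_watertight)
mesh.export("fairing_cleaned.stl")
```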

Anyway, after hole filling come Mesh Doctor again, Smoothing, Removing Spikes, Editing Boundaries, Defeaturing and any other command in the Polygons tab that seems useful. The only drawback here is that “undo” is not really available – you can cancel the last couple of commands with Ctrl-Z, but that doesn’t always work.

The fresh mesh, now turned into a beautiful mesh, is then surfaced, and the story ends there. I used the Exact Surfacing tab to do this. Unfortunately, I haven’t had the chance to apply Parametric surfacing yet, so I am not sure whether it would solve what I’m about to complain about further down.

Honestly, there’s not much to complain about. Geomagic’s mesh editing tools work very well and are comfortable to use; most importantly, they are quick. I tried to re-work my potato crisp fairing in Rapidform as well and, my god, it was choking and screaming and fainting and, as soon as I turned my eyes away from the screen, even crashing. I don’t see any reason for it apart from Rapidform being a bit inferior at handling large scans. Rapidform and Geomagic run on the same computer (not simultaneously, though). While Geomagic was cheerily skipping forward, Rapidform was grinding to a halt. In short, when it comes to mesh editing, Geomagic wins; when it comes to handling large scans, Geomagic wins again (yet there shouldn’t be any difference – Rapidform case studies and tutorials talk about handling large data sets like architectural monuments and such, so I’m inclined to think that we weren’t particularly lucky with our version).

As for automatic surface generation, I don’t have much to complain about either. As long as the mesh is well prepared, the surface will be generated without problems. The only thing that bugs me is that none of the surface patches will be truly flat even if there is a flat surface on the part – all of them will be slightly curved, which makes extraction of reference geometry a bit more time consuming. Unfortunately, this is not a software fault, simply the nature of NURBS (non-uniform rational basis spline) surfaces. Also, small features such as holes will most probably lose their regularity. Again, that comes down to the quality of the mesh.
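That slight curvature is why reference geometry usually gets extracted by fitting rather than by trusting any single patch. A minimal sketch of a least-squares plane fit in plain numpy (the file name is an assumption standing in for the points of a nominally flat region):

```python
# Sketch: recover a reference plane from a nominally flat region by
# least-squares fitting, then report how "un-flat" the region really is.
import numpy as np

def fit_plane(points):
    """Return (centroid, unit normal) of the best-fit plane through points."""
    centroid = points.mean(axis=0)
    # The singular vector with the smallest singular value is the plane normal
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

patch_points = np.loadtxt("flat_region.xyz")     # illustrative N x 3 points
origin, normal = fit_plane(patch_points)
deviation = np.abs((patch_points - origin) @ normal)
print("max flatness deviation:", deviation.max())
```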

In fact, such small features on freeform parts are a big problem. The initial scanning will always distort them: to stay with the hole example, it will definitely lose its regular circular shape, and with it its exact centre and radius. A measurement arm is then used on the physical part to detect the exact position of the circle centre and its radius, and the hole is added to the completed surface separately, most probably already in the CAD package. This is something I was hoping we could avoid with Rapidform, but unfortunately not. If the initial scan is not good enough, small-feature geometry will need to be added separately from manual measurements or measurement arm data. Otherwise, it is the user’s best guess at what the feature dimensions are.
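The same goes for the hole itself: once you have a handful of measured or scanned points around its rim, the centre and radius come out of a simple least-squares fit. A sketch (the rim points are assumed to be already projected into the hole’s plane, for instance with the plane fit above):

```python
# Sketch: recover a hole's centre and radius from rim points with an
# algebraic (Kasa) least-squares circle fit in 2D.
import numpy as np

def fit_circle(xy):
    """Solve x^2 + y^2 = 2*a*x + 2*b*y + c for (a, b, c) in a least-squares sense."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a**2 + b**2)
    return (a, b), radius

rim_xy = np.loadtxt("hole_rim_2d.txt")   # illustrative N x 2 points
centre, radius = fit_circle(rim_xy)
print("centre:", centre, "radius:", radius)
```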

Honestly, I used to think that one of the biggest issues in the reverse engineering process was the post-processing software, where the user’s tinkering can succeed or fail. However, without a good initial scan or point cloud, no processing software or user tinkering will succeed. Post-processing packages like Rapidform XOR and Geomagic Studio serve first of all as scan error correctors. Still, the more complicated the part, the fewer errors one is able to correct.


September 16, 2011

Very inexperienced first attempt with Rapidform XOR


Here comes the awkward part – because I’m not all that confident I can splash out information and pictures of the parts I’ve been working on, I’m going to keep to fairly plain text, which is not the best way to be interesting (well, reverse engineering isn’t exactly classified as an “interesting topic” when it comes up in pub conversations, is it?). Anyway, I will try to explain my first encounter with Rapidform XOR3 after hearing so many awesome things about it.

I believed that, after going through all the tutorials I could find, I was satisfactorily prepared to work on a simple-ish part. After all, Rapidform has the logic of any other CAD package, with added functions for processing point clouds and slightly awkward feature building. But even for a new user it is fairly straightforward if one has a modest familiarity with the digital design of components.

After data capture (which in our case results in a point cloud) the route, simply stated, is:

- align scans (if there are several scans of the part)

- clean up noise and get rid of stray points

- wrap the point cloud to get a mesh

All of these constitute Step 1 of the RevEng process. With my first example all of this was done for me by my more experienced superior. I was simply given a mesh and told to re-make it into a modifiable CAD model. What could be easier?
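As an aside, the clean-up step in that list is easy to picture in code. A sketch of stray-point removal using Open3D’s statistical outlier filter (parameters and file names are illustrative guesses; Rapidform and Geomagic do the equivalent with their own filters):

```python
# Sketch: removing stray points from a raw scan before wrapping it into a mesh.
import open3d as o3d

pcd = o3d.io.read_point_cloud("raw_scan.ply")        # illustrative file name
clean, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
print(f"kept {len(kept_idx)} of {len(pcd.points)} points")
o3d.io.write_point_cloud("clean_scan.ply", clean)
```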

The component was called a “fuel nose” and looked pretty simple in geometry – two cylinders on top of one another, with fillets, chamfers and a few holes. It was CT scanned because it has not-so-simple inside geometry, and this inside geometry was the reason the component could not be rebuilt the old-school way, by simply measuring it by hand.

I didn’t do any manipulations on the mesh (which can be done in Rapidform – filling holes, smoothing and so on) because, first of all, I didn’t know how (it turns out to be pretty simple: go into the mesh editing tools and do whatever you want), and also because I was told that the mesh was as good as it could be. Since I only needed the mesh as a reference, I didn’t have any reason not to believe that.

One defect on the mesh was the very ragged bottom surface:

 A very ragged mesh

Such a surface is no good for automatic surfacing (where algorithms stitch points together to create a surface) and is mainly due to scanning issues. But I accepted it as it was, since I didn’t know any better.

After I stopped staring pointlessly at the mesh I went on to the next step – autosegmentation. This divides the mesh into regions which are then used to extract reference geometry and features. But because I had a faulty mesh, the regions were even faultier (I don’t think that’s an actual word).

Faulty regions
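For the curious, a very crude analogue of what autosegmentation does is to group adjacent triangles whose normals agree within a tolerance. A sketch with trimesh and scipy as illustrative stand-ins (the real algorithm is far cleverer about curvature and edges):

```python
# Sketch: crude autosegmentation - group adjacent faces into regions wherever
# the fold angle between them is below a tolerance, then take connected
# components of that "smooth adjacency" graph.
import numpy as np
import trimesh
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

mesh = trimesh.load("fuel_nose.stl")                 # illustrative file name
angle_tol = np.radians(15)                           # assumed tolerance

smooth = mesh.face_adjacency[mesh.face_adjacency_angles < angle_tol]
n = len(mesh.faces)
graph = coo_matrix((np.ones(len(smooth)), (smooth[:, 0], smooth[:, 1])),
                   shape=(n, n))
n_regions, region_of_face = connected_components(graph, directed=False)
print("regions found:", n_regions)
```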

It wasn’t a big problem, as I wasn’t intending to use regions for feature extraction (with my novice mind I didn’t even know that was possible). At this point I needed regions only for reference geometry, but the regions weren’t even good enough for that. The inside of the component, which consisted of several cylindrical tunnels, was recognised as one whole region (imagine a tree with five branches detected as a single region: extracting a vector that goes through the trunk of the tree would be impossible; to do that, the trunk needs to be a separate region). It is possible to separate the regions manually, which I did as best I could (when, after absorbing some more Rapidform practice, I decided to redo this part, I simply fixed the mesh, because it actually wasn’t as good as it could have been, tinkered with the autosegmentation settings and got the “trunk and branches” separated without breaking a sweat).
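Once the trunk is its own region, getting a vector through it is not magic either. One simple approximation is to take the dominant principal direction of the region’s points, which works reasonably when the cylinder is long compared to its radius and sampled fairly evenly (the file name below is an assumption):

```python
# Sketch: approximate the axis of a roughly cylindrical region as the
# direction of greatest variance (first principal component) of its points.
import numpy as np

region_points = np.loadtxt("trunk_region.xyz")   # illustrative N x 3 points
centroid = region_points.mean(axis=0)
_, _, vt = np.linalg.svd(region_points - centroid)
axis = vt[0]                                     # largest-variance direction
print("axis point:", centroid, "axis direction:", axis)
```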

Anyway, with some swearing I managed to get all the necessary reference geometry (a vector through the part, a vector through each hole (8 of them), a plane for the part and a plane for each hole – 9 of each in total). The main body of the part was done as a Revolve feature, and the holes were also done as revolves, even though I’d have preferred them to be extruded. Those holes were quite a curious issue – even though they are obviously cylindrical holes, they were detected as plane regions, and I really fail to understand the logic behind it. No mesh fixing or autosegmentation options changed that. It didn’t really create any problems for me; I’m just baffled – why?

There is a better way to create such ambiguous holes in Rapidform – simply extract what is called a “primitive surface” from the hole region and then use this extracted surface to trim the body. No reference geometry is needed for the holes, and it doesn’t matter what Rapidform detects the region to be: you just say that this region is a cylinder and Rapidform quietly extracts it.

My first attempt to rebuild the part lasted a whole day, because I had very little idea of what Rapidform could do. My second attempt took maybe 2.5 hours, most of which was spent waiting for the software to finish its own processes, like healing or optimising the mesh.

Since this first try I’ve been stuck in Rapidform for two weeks straight, so I’d like to believe I’ve learned something. After fighting with the first part for a day I understood one thing – whether it’s Rapidform or anything else, if one wants a well-digitised part, the point cloud must be impossibly perfect. Rapidform is capable of giving a good template for a revolve sketch from the mesh perimeter, but it is still a bunch of awkwardly connected scattered points instead of straightforward geometry. So with that scan quality, every time I tried to stick a straight line over a ragged mesh line I just added to the error (hopefully not by too much). I’m quite happy with how I rebuilt the fuel nose on the second try, though I must be honest – if I had the part now I would attack it with callipers to make the digitised model even better.



September 14, 2011

Getting my hands dirty with Rapidform XOR

Without any preludes I’m going to jump straight to the case. Lately I’ve been introduced to Rapidform XOR3, a point cloud/scan processing software package that lets you take the output from data capture technology and turn it into a CAD model (or keep it as a mesh if that’s what you prefer).

The Rapidform package is praised because it offers something close to the usual CAD packages like Solidworks or Inventor, so the user doesn’t need to learn a new language. Yet I’d say some decent training is still required. Like any CAD package (strictly speaking, Rapidform is not a CAD package, but it surely tries to persuade you that it looks like one), it allows you to create features like extrudes, revolves and holes, and it accepts geometry editing information such as dimensions from the user (other point-processing packages are fairly limited on this point as far as I’m aware: feature generation is done mostly automatically to fit the scan and the user can’t really change any dimensions).

Describing it is all nice and jolly but vague. Here is a good example of Rapidform at work:

By the way, Youtube is invaluable when it comes to learning a new program: decent tutorials are available for Rapidform, Geomagic (another point processing software which I’ll talk about some other time), various Autodesk packages, Solidworks, CATIA and so on. I’m not going to sing ballads to the benefits of new media like Youtube right now – I’ll leave that for a lazy afternoon when I’m retired. Just saying: don’t know what to do? Go to Youtube.

Anyway, after going through a bunch of background info, training notes and tutorials, I felt like I had a pretty good idea of how to get what I want from Rapidform. It seemed to have been created just to solve all the problems I might encounter, and I was really eager to prove that right.

Here’s the thing – every software supplier will scream until they’re blue in the face that their package is the best there is for anything a feeble human mind can imagine. But when they make demonstrations or tutorials they obviously choose examples that are guaranteed to work, so their opinion is not one to bet money on. Yet Rapidform is not widespread enough to find independent opinions on it easily. There is a study which (as Rapidform marketing claims) shows that Rapidform is the best available solution, done by an independent (as far as I can tell) academic team. Well, I have read that study. It does talk about the benefits of Rapidform over other packages for a specific task, but in no way states that it is the god of point processing.

Here’s a link to it (link to download study in article).

The reason why I’m being so bitter about Rapidform is my very short experience with it. I’ll admit I might be barking up the wrong tree; after all, I’m a new user (two weeks is no experience, even though I might be close to 50 hours of work in it, which is more than I had in Solidworks over the first two years of my education – a sad fact).

Still, if what I have experienced so far can be considered representative of Rapidform, and it is considered one of the best out there, then I must say – there’s still a long way to go.

We’ll see what kind of song I’ll be singing in a year’s time.

September 08, 2011

RevEng Process

Before I go into detail on some of the specific tools used for reverse engineering purposes at WMG, I decided to lay down a simple outline of how the whole RE process is carried out. In the literature the steps of the process, the terms and the definitions all vary, not to mention that many sources go into a great deal of detail which serves more as a source of confusion than of information. Now that I’ve successfully untangled myself from all the misunderstandings, I can write down a cropped, summarised version of a very generalised RE process. I’ve divided the whole process into four steps, not all of them compulsory for all applications, but from looking into many, many case studies they seem to be very common.

The first and always present step is physical object data capture. Here the object’s geometry is detected by contact or non-contact techniques, and the output is used as a reference to rebuild the object in a virtual environment. The tools available for this step are abundant – starting from a measuring tape (a very stone-age-like approach, but surprisingly still used for simple-ish geometry parts; not for motorsport applications, I hope) and going all the way to high-precision and costly laser scanners. The main non-contact devices exploit laser beams, structured light or ambient radiation for feature detection, alongside techniques such as Computed Tomography or MRI. Contact techniques rely on touch probes that sample points on the object’s geometry. The CMM (Coordinate Measuring Machine) is one of the most common contact measuring devices and comes in a variety of forms in terms of size and available probes. Any of these techniques will produce raw data – point clouds, or STL in the case of CT – that in its raw form is most probably of little use.

Hence the second step – post-processing – is necessary to prepare the raw captured data for use. The most common data type, a point cloud, will be cleaned of any stray points, refined and in some cases even reduced in point quantity before being used further. Such manipulations can be done with the software packages built into the data capture technology or in separate packages. In some cases point clouds are sufficient input for the next application – for instance, some CNC machining. However, for most applications more work is required to transform the captured data into an appropriate format.
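Reducing point quantity sounds mundane, but it matters a lot for how responsive everything downstream feels. A minimal sketch of one common approach, voxel downsampling, using Open3D as an illustrative stand-in (the 0.5 mm voxel size is a guess):

```python
# Sketch: thin a dense point cloud by keeping one averaged point per voxel.
import open3d as o3d

raw = o3d.io.read_point_cloud("raw_capture.ply")     # illustrative file name
reduced = raw.voxel_down_sample(voxel_size=0.5)      # assumed 0.5 mm voxels
print(len(raw.points), "->", len(reduced.points), "points")
o3d.io.write_point_cloud("reduced_capture.ply", reduced)
```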

Then the third and longest step, the one that requires experience and practice, begins – transformation of the captured and refined data. The required end result varies by application: for some it is enough to extract a mesh from the point data (which would still be in STL format and of little use in many packages; what's STL, you ask – google it!), while others require a fully rebuilt CAD model with a feature tree that can be easily modified in the widely used CAD packages. The software packages available on the market today offer a variety of end results and quality. It is a huge topic to discuss and has been my focus so far, so I’ll go into depth on it some other time.

The final, very important step is validation, which involves verifying that the recreated digital representation is accurate. Methods here are also very varied and can be as simple as the accuracy analysers built into CAD packages, which require one click from the user to produce a result. It may sound like a routine procedure that doesn’t deserve to be defined as a step of its own, yet it is significant enough to be highlighted, and, sadly, often not enough attention is paid to it.
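The simplest validation I can think of is a deviation check: how far do the original scan points sit from the rebuilt surface? A sketch with trimesh (file names and the 0.1 mm tolerance are assumptions, and commercial analysers report much richer, colour-mapped results):

```python
# Sketch: point-to-surface deviation between the original scan and the
# rebuilt model exported as STL.
import numpy as np
import trimesh

rebuilt = trimesh.load("rebuilt_model.stl")          # illustrative file name
scan_points = np.loadtxt("original_scan.xyz")        # illustrative N x 3 points

_, distances, _ = trimesh.proximity.closest_point(rebuilt, scan_points)
print("max deviation:", distances.max())
print("RMS deviation:", np.sqrt((distances ** 2).mean()))
print("within 0.1 mm:", (distances < 0.1).mean() * 100, "%")
```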

This outline is very generalised. So many hardware and software tools are available that mentioning everything that can be applied in each step would turn this single blog entry into a book (and I would still leave most of it out, because I’m pretty sure things exist that I’m not aware of and have no idea how to find). These are the steps that I follow, as for my purposes a good CAD model is the required result of the RE process. Later I will go into greater detail on the actual CMMs, scanners and software that I’ve had the chance to get familiar with (and, at this point, already get very annoyed with; in an age where a computer is almost capable of replacing human beings and CAD software pretty much does anything for you, there are still so many nuisances we are bound to deal with). Mainly I’m occupied with captured data transformation (which is eating up a lot of my time – did I mention that it’s the longest step?) where a single tiny part can be recreated in a million ways – I just need to find the best one.


August 25, 2011

Why RevEng for Motorsport

Any racing discipline uses a vehicle that consists of hundreds of components – that is nothing new. Whether it’s a motorbike, an open-wheeler or a stock car, a plethora of components working together drags it from start to finish. And each component, to some extent, requires an adequate engineering brain to make it.

The designing life has been made substantially easier by the existence of CAD packages. However, an object often exists without the related CAD data being available. It might be that the object was created before CAD came into use, that it was a one-off, or that the manufacturer of the object is not able or willing to share this information. Such occurrences are common in the racing world. When designers are trying to fit off-the-shelf components together with customised parts under tight time constraints, their hair might go gray.

This is where reverse engineering – the recreation of a real-life object in a CAD environment – comes in. In simple words, reverse engineering involves scanning an object’s geometry and processing the scan data to turn it into a useful CAD model. I’m going to be brave and say that the majority of racing disciplines will sooner or later require reverse engineering services. Today many hardware and software solutions are available to carry out the reverse engineering steps, but no single one is universal, relatively cheap or time efficient. Therefore, it is essential to clearly define what result reverse engineering needs to deliver in order to identify the best tools for the job.

In essence, this is what I am doing – understanding reverse engineering requirements and defining routes by which they can be met. Due to the very short timescale (only six weeks) I’m not intending to invent “new electricity”. My aim is to find out how specific people can benefit from reverse engineering, what results they need delivered to reach their objectives, and how this can be achieved using the kit available at WMG (and it’s a fairly fancy kit, I must admit).

There is a lot to learn and a lot to find out. Reverse engineering is a bit of a gray area which needs some colour added to it badly.

One final thing – I have this silly habit of giving one-word names to the projects I work on, most of the time for no good reason. I named one of the projects I worked on in my previous job SPARTA and got tired of answering “Because I can!” when asked “Why SPARTA?” (and sadly the name didn’t stick). For this project I chose RevEng, which seemed a decent enough abbreviation but carries no well-justified idea behind it. Actually, I tried to do something similar to what NASA did when naming the Mercury space probe MESSENGER (MErcury Surface, Space ENvironment, GEochemistry and Ranging), but I just don’t have the imagination. RevEng it is.

(“RevEng for Motorsport” sounds a bit like a title to a very wrongly twisted movie. But they make movies about a tyre that falls in love and explodes things which is bad enough.)

