Persim3D Research Project
Persim3D is a simulator for human activities in pervasive spaces. It grows out of human-centric research, which studies interactions between humans and smart environments. This kind of research relies heavily on datasets, and that is where Persim3D comes in: it lets researchers generate those datasets without needing expensive resources such as a physical smart home. The project thus aims to serve the social good and benefit a wide variety of target user groups.
My Engineering Honors Thesis Publication on Persim3D: Link to Paper Here
My Engineering Research Symposium Poster Publication on Persim3D: Link to Poster Here
PLATFORM: Windows, Mac
ROLE: Technical Artist, 3D Artist
CLIENT: University of Florida, Computer & Information Sciences Department, Mobile & Pervasive Computing Lab
My work on the Persim3D project involves building simulations through 3D modeling in Maya and animating assets, which in turn produce better, more effective datasets for the research team.
I worked on the concept, 3D model, and rig of the 'Ethan' character in the game and helped with programming the character's animations in Unity. I 3D modeled several of the furniture assets in the demo. Lastly, I also provided concept detail for the user interface of the simulation.
The current problem area is making the Persim3D world look more realistic, at a higher fidelity than it stands right now. At present, 3D models and animations are created in software packages (such as Autodesk Maya and Blender), quickly outputted from motion capture, or retrieved from open sources online. They are then exported from these packages and re-imported into the Unity game engine that hosts the Persim3D project. By pushing updates to the software via GitHub and pulling new information and data from the server, researchers can additively construct more assets for Persim3D.
However, undergraduate researchers constantly cycle in and out of the lab, and students carry ever-growing workloads. There was also always a steep learning curve for new research assistants unfamiliar with these programs, since game engine systems and integration are not readily taught in the main coursework of their engineering, art, or other degree programs. As a result, 3D models and animations were obtained from the several sources mentioned above: motion capture, assets created from scratch, and open-source files.
This is where the problem stems from. From prior experience and from further research on the project, I found that using multiple rigs for virtual characters, gathered from many different sources, consistently caused confusion, hindered the project's progression, and introduced animation and topology errors in the virtual character, since bones in the character's rig (or armature) could shift during retargeting inside the Unity game engine. Furthermore, 3D software packages, which students use for most of their animation work, do not even allow multiple rigs to drive fully functional animations on the same character in a consistent way; surprisingly, Unity is a powerful enough tool that it allowed us to do so.
After discovering this, I found it pointless to keep modifying existing animations for higher fidelity: not only was it a meticulous task, it would never solve the root problem. Even if one animation, for example combing hair, was fixed to look correct in a 3D modeling package, it would still not look right in the Unity game engine because of the constant change from rig to rig. The problem needed to be solved at the rig level, with only one rig used for the duration of a character's animations. That allows a clean, non-problematic solution.
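To make the rig problem concrete, here is a minimal illustrative sketch (not project code; the bone names and the `retarget` helper are hypothetical) of why an animation clip authored against one rig breaks on another: the clip's curves reference bone names that the target rig may not contain, so those curves are dropped or land on shifted bones.

```python
# Illustrative sketch: animation curves reference bones by name, so a clip
# authored for one rig can only drive the bones a second rig happens to share.
# Bone names below are hypothetical examples, not Persim3D's actual skeletons.

def retarget(clip_bones, target_rig_bones):
    """Split a clip's bones into those the target rig can drive and those it drops."""
    mapped = [b for b in clip_bones if b in target_rig_bones]
    missing = [b for b in clip_bones if b not in target_rig_bones]
    return mapped, missing

# A clip authored on a Mixamo-style skeleton...
mixamo_clip = ["Hips", "Spine", "LeftArm", "LeftForeArm", "LeftHand"]
# ...applied to a differently named custom rig.
custom_rig = {"Hips", "Spine", "L_UpperArm", "L_LowerArm", "L_Hand"}

mapped, missing = retarget(mixamo_clip, custom_rig)
print("driven bones:", mapped)    # only the bones both rigs share
print("dropped curves:", missing) # arm animation silently breaks
```

Sticking to a single rig sidesteps this entire class of silent mismatch, which is exactly the motivation for the one-rig approach above.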
Approaching the project again, I suggested we use strictly one rig for our virtual character, 'Ethan', to ensure success. Because of the problems described above, which had forced us to obtain rigs and animations from numerous sources, our team turned to Mixamo, an Adobe Creative Cloud service that is free to use with an account. From there, we could download high-quality animations retargeted to our virtual 3D character and replace our current model with a higher-fidelity one (Mixamo 3).
Oddly, even after doing so, only half of the problems in the Unity game engine were solved. The visualizations were still not as high fidelity as our team had hoped, although the overall look of the characters and some assets improved. The same underlying problem persisted.
My conjectured solution, discussed more openly in the results section, is this: even though the model, rig, and animations now presumably share the same rig for most things in Persim3D's virtual game engine, the root model and the idle-state animations are still provided by Unity itself through its default character. Although the default character was improved and replaced, I find that it was never replaced in the grounded state of the project, which can still alter the animation results and cause the same bone-retargeting issue.
Overall, this project was extremely rewarding to work on as I learned more about animation programming in Unity and configuring assets to work in a real-time environment like the GatorTech Smart-home. 